Google LLC

03/26/2026 | Press release | Distributed by Public on 03/26/2026 09:54

Build real-time conversational agents with Gemini 3.1 Flash Live


Today, we're launching Gemini 3.1 Flash Live via the Gemini Live API in Google AI Studio. Gemini 3.1 Flash Live enables developers to build real-time voice and vision agents that not only process the world around them, but also respond at the speed of conversation.

This release is a step change in latency, reliability and dialogue naturalness, delivering the quality needed for the next generation of voice-first AI.

Experience lower latency, better reliability and higher quality

For real-time interactions, every millisecond of latency chips away at the natural conversational flow that users expect. The new model better understands tone, emphasis and intent, enabling agents with key improvements:

  • Higher task completion rates in noisy, real-world environments: We've significantly improved the model's ability to trigger external tools and deliver information during live conversations. By better discerning relevant speech from environmental sounds like traffic or television, the model more effectively filters out background noise to remain reliable and responsive to instructions.
  • Better instruction-following: Adherence to complex system instructions has been boosted significantly. Your agent will stay within its operational guardrails, even when conversations take unexpected turns.
  • More natural and low-latency dialogue: The latest model improves on latency and is even more effective at recognizing acoustic nuances like pitch and pace compared to 2.5 Flash Native Audio, making real-time conversations feel a lot more fluid and natural.
  • Multilingual capabilities: The model supports more than 90 languages for real-time multimodal conversations.
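The tool-triggering improvement above relies on the Live API's function calling support, where you declare tools in the session configuration so the model can invoke them mid-conversation. A minimal sketch using plain-dict declarations (the `get_weather` function is an illustrative assumption, not a built-in tool):

```python
# Sketch: declaring a tool the agent may call during a live conversation.
# "get_weather" is a hypothetical example function for illustration only.
weather_tool = {
    "function_declarations": [
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "OBJECT",
                "properties": {"city": {"type": "STRING"}},
                "required": ["city"],
            },
        }
    ]
}

# Session config: audio responses, with the declared tool made available
# to the model alongside any system instructions.
live_config = {
    "response_modalities": ["AUDIO"],
    "tools": [weather_tool],
}
```

When the model decides to call the tool, your application receives a tool-call message over the session, executes the function, and streams the result back so the agent can speak the answer.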

See the Gemini Live API in action

Developers are actively building voice agents that converse with a natural flow and pace and take actions reliably using Gemini Flash Live models. Here are a few examples of real-world apps that use the model to power their conversational interactions:

Using the Gemini Live API, Stitch now enables its users to vibe design with their voice. The agent can 'see' the canvas and selected screens and give design critiques, build variations and more.

In this demo, Ato, an AI companion device for older adults, uses Gemini 3.1 Flash Live's multilingual capabilities to turn daily conversations into real connections for its users.

See how the Weekend team integrates Gemini 3.1 Flash Live's strong characterization and human-like delivery to add a unique theatrical flair to the Game Master in their RPG, Wit's end.

Build with an expanding ecosystem of integrations

The Live API is built for production environments, but real-world systems must handle diverse inputs, from live video streams to on-demand phone calls.

For systems that require WebRTC scaling or global edge routing, we recommend exploring our partner integrations to streamline the development of real-time voice and video agents.

Get started with the Live API

Gemini 3.1 Flash Live is available starting today via the Gemini API and in Google AI Studio. Developers can use the Gemini Live API to integrate the model into their applications.

Explore our developer documentation to learn how you can build real-time agents:

  • Gemini Live API documentation: Explore features like multilingual support, tool use and function calling, session management for long-running conversations, and ephemeral tokens.
  • Gemini Live API examples: Get inspiration for the kind of voice experiences you can build today with the model.
  • Gemini Live API Skill: For coding agents to learn and build with the Live API.

Get started with the Google GenAI SDK:
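As a starting point, here is a hedged sketch of a single Live API turn with the google-genai Python SDK. The model identifier and the `run_live_turn` helper are assumptions based on this announcement rather than official sample code; running it requires `pip install google-genai` and a `GEMINI_API_KEY` in the environment:

```python
# Sketch: open a Live API session, send one text turn, collect the reply.
import asyncio

MODEL = "gemini-3.1-flash-live"  # assumed model id from this announcement


async def run_live_turn(prompt: str) -> str:
    """Connect to the Live API, send one text turn, and join the streamed reply."""
    from google import genai          # pip install google-genai
    from google.genai import types

    client = genai.Client()           # reads GEMINI_API_KEY from the environment
    config = types.LiveConnectConfig(response_modalities=["TEXT"])
    chunks = []
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text=prompt)]),
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                chunks.append(message.text)
    return "".join(chunks)


# Usage (requires a valid API key and network access):
# reply = asyncio.run(run_live_turn("Hello there!"))
```

For voice agents you would set `response_modalities=["AUDIO"]` and stream microphone input with the session's realtime input methods instead of a single text turn; the documentation linked above covers both paths.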
