03/26/2026 | Press release | Distributed by Public on 03/26/2026 09:54
Today, we're launching Gemini 3.1 Flash Live via the Gemini Live API in Google AI Studio. Gemini 3.1 Flash Live enables developers to build real-time voice and vision agents that not only process the world around them, but also respond at the speed of conversation.
This release is a step change in latency and reliability, with more natural-sounding dialogue, delivering the quality needed for the next generation of voice-first AI.
In real-time interactions, every millisecond of latency chips away at the natural conversational flow users expect. The new model better understands tone, emphasis and intent, bringing key improvements to agents.
Developers are actively building voice agents that communicate with a natural flow and pace and take actions reliably with Gemini Flash Live models. Here are a few examples of real-world apps that use the model to power their conversational interactions:
Using the Gemini Live API, Stitch now enables its users to vibe design with their voice. The agent can 'see' the canvas and selected screens and give design critiques, build variations and more.
In this demo, AI companion device for older adults, Ato, uses Gemini 3.1 Flash Live's multilingual capabilities to turn daily conversations into real connections for its users.
See how the Weekend team uses Gemini 3.1 Flash Live's strong characterization and human-like delivery to add a unique theatrical flair to the Game Master in their RPG, Wit's End.
The Live API is built for production environments, but real-world systems require handling of diverse inputs, from live video streams to on-demand phone calls.
For systems that require WebRTC scaling or global edge routing, we recommend exploring our partner integrations to streamline the development of real-time voice and video agents.
Gemini 3.1 Flash Live is available starting today via the Gemini API and in Google AI Studio. Developers can use the Gemini Live API to integrate the model into their applications.
Explore our developer documentation to learn how you can build real-time agents:
Get started with the Google GenAI SDK:
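As a starting point, here is a minimal sketch of a real-time session using the Google GenAI Python SDK's Live API. The model identifier below is an assumption based on this announcement, and the single text turn stands in for the streamed audio and video a real voice agent would send; check the developer documentation for the exact model name and streaming setup.

```python
# Minimal Live API sketch with the google-genai Python SDK.
# Assumptions: the model id string and the GEMINI_API_KEY environment
# variable; consult the official docs before relying on either.
import asyncio
import os

MODEL = "gemini-3.1-flash-live"  # assumed id; verify in Google AI Studio


def build_live_config() -> dict:
    # Request spoken audio replies; ["TEXT"] is also supported.
    return {"response_modalities": ["AUDIO"]}


async def main() -> None:
    # SDK import is local so this sketch stays importable without it.
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    async with client.aio.live.connect(
        model=MODEL, config=build_live_config()
    ) as session:
        # Send one text turn; a production agent would instead stream
        # microphone/camera frames via session.send_realtime_input(...).
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Hello!"}]},
            turn_complete=True,
        )
        async for message in session.receive():
            # Audio bytes arrive incrementally; stop once the turn ends.
            if message.server_content and message.server_content.turn_complete:
                break


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    asyncio.run(main())
```

The session object is bidirectional: you can keep sending realtime input while responses stream back, which is what makes conversation-speed interruption and turn-taking possible.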