Google LLC

03/10/2026 | Press release | Distributed by Public on 03/10/2026 10:38

Gemini Embedding 2: Our first natively multimodal embedding model

Today we're releasing Gemini Embedding 2, our first fully multimodal embedding model built on the Gemini architecture, in Public Preview via the Gemini API and Vertex AI.

Expanding on our previous text-only foundation, Gemini Embedding 2 maps text, images, videos, audio, and documents into a single, unified embedding space and captures semantic intent across more than 100 languages. This simplifies complex pipelines and enhances a wide variety of multimodal downstream tasks, from Retrieval-Augmented Generation (RAG) and semantic search to sentiment analysis and data clustering.

New modalities and flexible output dimensions

The model is based on Gemini and leverages its best-in-class multimodal understanding capabilities to create high-quality embeddings across:

  • Text: supports an expansive context of up to 8192 input tokens
  • Images: capable of processing up to 6 images per request, supporting PNG and JPEG formats
  • Videos: supports up to 120 seconds of video input in MP4 and MOV formats
  • Audio: natively ingests and embeds audio data without needing intermediate text transcriptions
  • Documents: directly embeds PDFs up to 6 pages long
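The per-modality limits above lend themselves to a simple client-side pre-check before sending a request. This is an illustrative sketch only: the limit values come from the list above, but the helper function and its interface are hypothetical, not part of any official SDK.

```python
# Hypothetical client-side pre-check for the per-modality limits listed
# above. The limit values come from the announcement; the helper itself
# is illustrative and not part of any official SDK.
LIMITS = {
    "text_tokens": 8192,     # max input tokens per text part
    "images": 6,             # max images per request (PNG/JPEG)
    "video_seconds": 120,    # max video length (MP4/MOV)
    "pdf_pages": 6,          # max pages for a directly embedded PDF
}

def check_request(text_tokens=0, images=0, video_seconds=0, pdf_pages=0):
    """Return a list of limit violations (empty if the request fits)."""
    actual = {
        "text_tokens": text_tokens,
        "images": images,
        "video_seconds": video_seconds,
        "pdf_pages": pdf_pages,
    }
    return [
        f"{name}: {actual[name]} exceeds limit of {limit}"
        for name, limit in LIMITS.items()
        if actual[name] > limit
    ]

print(check_request(text_tokens=4096, images=2))   # [] -> request fits
print(check_request(images=8, video_seconds=300))  # two violations reported
```

Failing fast on the client avoids a round trip for requests the service would reject anyway.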

Beyond processing one modality at a time, this model natively understands interleaved input so you can pass multiple modalities of input (e.g., image + text) in a single request. This allows the model to capture the complex, nuanced relationships between different media types, unlocking more accurate understanding of complex, real-world data.
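Interleaving works by placing parts of different modalities side by side in a single request body. The sketch below assumes a parts-list shape like the Gemini API's content format; the exact fields the embedding endpoint accepts, and the model name string, are assumptions here, not confirmed API details.

```python
import base64

# Illustrative sketch of an interleaved multimodal request body. The
# parts-list shape mirrors the Gemini API's content format, but the exact
# fields accepted by the embedding endpoint are an assumption.
def build_interleaved_request(model, caption, image_bytes):
    return {
        "model": model,
        "contents": [{
            "parts": [
                {"text": caption},  # text part...
                {"inline_data": {   # ...interleaved with an image part
                    "mime_type": "image/jpeg",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ]
        }],
    }

req = build_interleaved_request(
    "gemini-embedding-2",          # hypothetical model identifier
    "A photo of our new product",  # text modality
    b"\xff\xd8\xff\xe0fake-jpeg",  # stand-in image bytes for illustration
)
print(len(req["contents"][0]["parts"]))  # 2 interleaved parts
```

Because both parts travel in one request, the model can relate the caption to the image directly rather than embedding each in isolation.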

Like our previous embedding models, Gemini Embedding 2 incorporates Matryoshka Representation Learning (MRL), a technique that "nests" information so the leading dimensions of an embedding form a complete, lower-dimensional representation on their own. This enables flexible output dimensions, scaling down from the default of 3072, so developers can balance quality against storage and compute costs. We recommend 3072, 1536, or 768 dimensions for the highest quality.
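In practice, scaling an MRL embedding down means keeping a prefix of the full vector and L2-renormalizing it. A minimal sketch in pure Python, with a made-up vector standing in for real model output:

```python
import math

def truncate_embedding(vec, dims):
    """Keep the first `dims` MRL dimensions and L2-renormalize.

    With Matryoshka embeddings, the leading dimensions carry the most
    information, so a prefix of the full vector is itself a usable
    embedding once renormalized to unit length.
    """
    prefix = vec[:dims]
    norm = math.sqrt(sum(x * x for x in prefix))
    return [x / norm for x in prefix]

full = [0.5, -0.25, 0.25, 0.1] * 768       # stand-in 3072-dim embedding
small = truncate_embedding(full, 768)      # one of the recommended sizes
print(len(small))                          # 768
print(round(sum(x * x for x in small), 6)) # 1.0 (unit length)
```

Storing the 768-dimension prefix instead of the full 3072 cuts vector storage by 4x while keeping most of the retrieval quality.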

To see these embeddings in action, try out our lightweight multimodal semantic search demo.
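Under the hood, a semantic search like the demo's reduces to a nearest-neighbor lookup by cosine similarity in the shared embedding space. A toy sketch, with hand-made 3-dimensional vectors standing in for real model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus of (item, embedding) pairs. Real embeddings would come from
# the model; these tiny vectors are made up to illustrate the ranking step.
corpus = [
    ("sunset photo", [0.9, 0.1, 0.0]),
    ("budget spreadsheet", [0.0, 0.2, 0.9]),
    ("beach video", [0.8, 0.3, 0.1]),
]

def search(query_vec, corpus, top_k=2):
    """Rank corpus items by similarity to the query embedding."""
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

print(search([1.0, 0.2, 0.0], corpus))  # ['sunset photo', 'beach video']
```

Because text, images, and video share one embedding space, the same ranking loop serves a text query against a mixed-media corpus.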

State-of-the-art performance

Gemini Embedding 2 doesn't just improve on legacy models. It establishes a new performance standard for multimodal depth, introducing strong speech capabilities and outperforming leading models in text, image, and video tasks. This measurable improvement and broad multimodal coverage make a single model suitable for a wide range of embedding workloads.

Unlocking deeper meaning for data

Embeddings power experiences across many Google products, from RAG, where they play a crucial role in context engineering, to large-scale data management and classic search and analysis. Some of our early-access partners are already using Gemini Embedding 2 to unlock high-value multimodal applications.

Start building today

Get started with the Gemini Embedding 2 model through Gemini API or Vertex AI.

Learn how to use the model in our interactive Gemini API and Vertex AI Colab notebooks. You can also use it through LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, ChromaDB, and Vector Search.

By bringing semantic meaning to the diverse data around us, Gemini Embedding 2 provides the essential multimodal foundation for the next era of advanced AI experiences. We can't wait to see what you build.
