This preliminary transcript is provided for the convenience of investors only; for a full recording, please see the Goldman Sachs Communacopia + Technology Conference webcast.
Thomas Kurian, CEO of Google Cloud, at the Goldman Sachs Communacopia + Technology Conference on September 9, 2025
Eric Sheridan (Goldman Sachs): Okay. Thanks, everyone, for getting settled. Our next presentation and conversation is going to be with Alphabet's Thomas Kurian, CEO of Google Cloud. I'm going to start with the Safe Harbor, give you a little bit of Thomas' background, bring Thomas up, and he's going to go through some slides and then we'll have a conversation.
Some of the statements Mr. Kurian may make today could be considered forward-looking. These statements involve a number of risks and uncertainties that could cause actual results to differ materially. Please refer to Alphabet's Forms 10-K and 10-Q, including the risk factors discussed in its Form 10-K filing. Any forward-looking statements Mr. Kurian makes are based on assumptions as of today, and Alphabet undertakes no obligation to update them.
Thomas Kurian joined Google Cloud as CEO in November of 2018, bringing deep enterprise experience to the company. He has grown the business into one of the world's largest public clouds, with an annual revenue run rate of more than $50 billion. Thomas is here for the third year in a row. Thomas, welcome to the conference and thanks for coming again.
Thomas Kurian, CEO, Google Cloud: Thank you, Eric. Thank you, all, for having me. Cloud computing continues to grow as the primary vehicle through which enterprises deploy their core information technology systems. Although cloud has grown, it's still in its early phase, because a lot of machines and applications still run on premises and have not yet moved. So despite our growth, we see a lot of opportunity ahead.
Over the last two years, organizations in many industries have been changing who they choose as their cloud partner. In the past, they were focused on application hosting or web hosting. Increasingly, they're looking at who can bring the technology and solutions to help them transform their business with artificial intelligence applied in different domains across their organization.
At Google Cloud, we have deep product differentiation because of years of work in AI. As a result of that product differentiation, we're capturing new customers faster, we're deepening our relationships with existing customers and we're growing our total addressable market.
So why are we winning? First, we provide deep product differentiation in performance, cost, reliability, and efficiency in AI infrastructure.
Second, we provide deep differentiation by offering a leading suite of best-in-class generative AI models.
Third, in order to feed these models, we use our historical strength in data processing, analytics, and security to feed models with high-quality data and keep them safe.
And finally, for many years now, we've been building domain-specific AI applications and agents, and that work is now seeing a lot of interest from customers.
So, starting with AI infrastructure. We've introduced chips for many years; we're in our 11th year now building AI systems and chips. Our AI systems are optimized for high-performance, highly reliable, and scalable training, as well as for inference.
For example, if you're running a large-scale cluster, we have two times the power efficiency, meaning you get two times the FLOPS per watt. And with power now the scarce resource, that means you get a lot more capacity. We're typically seeing about a 50% performance delta between us and other players, and if you look at the total capacity you can get on a single system, you can get 118 times more throughput through our systems than you can from the next player.
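To make the power-efficiency point concrete, here is a minimal sketch; the power budget and efficiency figures are hypothetical, not Google's actual numbers. When power is the binding constraint, doubling FLOPS per watt doubles the compute you can deliver from the same facility.

```python
# Hypothetical illustration of why FLOPS per watt matters when power,
# not chips, is the scarce resource. All numbers are made up.
POWER_BUDGET_W = 100e6        # assumed fixed facility power budget (100 MW)
BASELINE_FLOPS_PER_W = 1e9    # assumed baseline efficiency
IMPROVED_FLOPS_PER_W = 2e9    # 2x FLOPS per watt, as claimed

def deliverable_flops(flops_per_watt: float) -> float:
    """Total sustained FLOPS obtainable under a fixed power budget."""
    return POWER_BUDGET_W * flops_per_watt

ratio = deliverable_flops(IMPROVED_FLOPS_PER_W) / deliverable_flops(BASELINE_FLOPS_PER_W)
print(ratio)  # 2.0: twice the capacity from the same power envelope
```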
In addition to that, we've integrated high-performance, AI-specific storage. It can be used to scale out a cluster much more efficiently and, if you're doing inference, offers incredibly low latency. Our years of work in storage optimization have now seen lots of interest: we've seen a 37 times increase in the volume of data being used in our AI-optimized storage.
We connect all of this with very high-bandwidth optical networking. The value of optical networking is that you can dynamically change the configuration of a cluster, slicing it up differently for training and inference without taking an outage, which is hugely important for labs as they see demand shifting from training workloads to inference.
And finally, Google has been at the forefront of most of the software people are using for training, for example, frameworks and compilers like JAX, XLA, and Pathways. All this software expertise allows us to optimize the stack.
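As a rough illustration of how that software stack fits together (a minimal sketch; the function and shapes are hypothetical), JAX code is traced once and handed to the XLA compiler, which emits optimized code for whatever accelerator sits underneath:

```python
import jax
import jax.numpy as jnp

# jax.jit traces this function and compiles it with XLA, so the same
# Python source runs optimized on TPU, GPU, or CPU with no changes.
@jax.jit
def attention_scores(q, k):
    # A toy scaled dot-product attention score, just to have
    # something worth compiling.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

q = jnp.ones((128, 64))
k = jnp.ones((128, 64))
print(attention_scores(q, k).shape)  # (128, 128)
```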
We see demand from four customer segments. First are the AI labs: if you take the ten largest AI labs in the world, nine of them are our customers. Second, we see demand from traditional enterprises who are deploying AI models. Third, we're seeing demand in capital markets: as capital markets shift from using classical computation for algorithms to using inference, the same systems we offer can be used to provide very high-frequency calculations.
And fourth, we're seeing interest in high-performance computing applications. SSI, a leading lab, is a customer. LG Electronics and LG AI found both performance and cost benefits in using our infrastructure.
On this infrastructure, we offer a suite of models, not just Alphabet's, but 182 leading models from the industry. Our own models fall into four categories. First, we offer a leading model for large-scale generative AI applications: Gemini. Gemini leads in many dimensions: performance, cost, quality, factuality, and the ability to do very sophisticated kinds of reasoning. It's used by 9 million developers to build applications.
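For a sense of what that developer adoption looks like in practice, here is a minimal sketch using the Google Gen AI Python SDK (the prompt is hypothetical, and the client assumes an API key is already configured in the environment):

```python
from google import genai

# The client picks up the API key from the environment
# (e.g., the GOOGLE_API_KEY variable).
client = genai.Client()

# A single call to a Gemini model; developers build applications by
# composing calls like this with their own data and tools.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the main tradeoffs between training and inference hardware.",
)
print(response.text)
```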
And just to give you a sense, compared to 1.5, which we launched in January of this year, 2.5, our latest model, reached a trillion tokens 20 times as fast. So we're seeing large-scale adoption of Gemini by the developer community. In addition to that, we offer a leading suite of what are called diffusion models. Diffusion models create images, video, audio, speech, et cetera.
We've added a third set of models around scientific computation. For example, our time-series model is used by many firms in financial services to do numerical prediction of sequences. For molecular design, we offer a model to help people design molecules, which is getting a lot of interest in the pharmaceutical industry. So there's a whole range of models.
And as people switch from just using a raw model to building an agent, we've introduced, based on our history of leading many open-source projects, something called the Agent Development Kit, a platform to help people build agents. It is by far the leading agent development platform in the industry, supported by over 120 companies.
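To give a flavor of what building on the Agent Development Kit looks like, here is a minimal sketch following the ADK's Python quickstart pattern (the tool, its return values, and the agent's purpose are hypothetical):

```python
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order in a backend system."""
    # A real agent would call an internal API here.
    return {"order_id": order_id, "status": "shipped"}

# The agent pairs a Gemini model with an instruction and a set of
# tools; the ADK runtime decides when the model should call the tool.
root_agent = Agent(
    name="order_status_agent",
    model="gemini-2.0-flash",
    instruction="Help users check the status of their orders.",
    tools=[get_order_status],
)
```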
To give you a sense of the scale: if you compare us to other hyperscalers, we are the only hyperscaler that offers both our own systems and our own models; we're not just reselling other people's stuff. And on the volume of tokens we process: we process twice the tokens of the other providers in half the time, so roughly four times the rate.
We have a lot of different companies using these AI models, from companies creating digital products to companies using AI within their organization. Canva is an example of a company using our diffusion models to create image and video content. ServiceNow is one of many SaaS companies that use our model Gemini. And the reason they're using it is that not only does it give them great performance, quality, and latency, but it can also be deployed in four different configurations: in a cloud, in a classified environment, out on the edge, and now also on top of any Nvidia cluster. In the past, if you wanted to run a model in your own data center, you had to use an open-source model, because running it there meant the provider giving up the model's weights. We're the only ones that offer that as well.
When you use models, you need to feed them with high-quality data. And as you put more and more of your company's information into the model, you need to keep the model secure.
So our history and expertise in large-scale data platforms has helped us, as has our focus on building security products. We allow people to migrate data, clean it, prepare it, and feed it into the models using our data cloud.
Second, we provide incredibly low-latency connectivity between our analytic and database platforms and models running on our cloud, allowing people to use models to process information from our data platforms.
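As an illustration of what running a model next to the data looks like (a minimal sketch; the project, dataset, and table names are placeholders), BigQuery can invoke a registered Gemini model directly from SQL via ML.GENERATE_TEXT, so the data never has to leave the warehouse:

```python
from google.cloud import bigquery

client = bigquery.Client()

# ML.GENERATE_TEXT calls a remote Gemini model registered in BigQuery,
# so inference happens alongside the data. All names are hypothetical.
query = """
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `my_project.my_dataset.gemini_model`,
  (SELECT CONCAT('Summarize this review: ', review_text) AS prompt
   FROM `my_project.my_dataset.reviews` LIMIT 10),
  STRUCT(TRUE AS flatten_json_output)
)
"""
for row in client.query(query).result():
    print(row.summary)
```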
Third, as people want to understand data and use models to reason over that data, we've introduced new data science and conversational analytics agents. Think of it as vibe coding with your data: it's much easier for anybody to ask questions in natural language, do data analysis, and create data science models.
All of that is driving growth in our data platforms. To give you a sense, we've seen a 27 times increase in the volume of data processed with Gemini in our data cloud, BigQuery. And while people normally think of data warehouses as things that handle numbers and tables, BigQuery is now also being used to store and process unstructured data. We have many more customers than some of the pure-play providers. And our strength in security is now being applied to AI models.
We protect your data. We have new solutions to protect the models themselves, so that when you load your data into a model, the model doesn't get compromised. And third, with new advances we've introduced, we protect organizations from threats that use AI models to attack systems. All of that has driven growth with a lot of different customers, from regulated industries to commercial enterprises to small businesses.
Two quick examples. Radisson Hotels took all their customer-segmentation data and all the data from their hotels, consolidated it in a data cloud, and used Gemini and our diffusion models to create advertising. Virgin Media is using the same combination, but to improve the speed of decision making and data engineering within their organization.
Lastly, we started our work to build domain-specific enterprise agents in 2021, so we've worked on it for four years now. We focused on five areas. The first is agents to help software engineers write code: our Gemini Command Line Interface AI agent, which we introduced on June 24th, has grown to close to a million users already.
We also allow people to build domain-specific agents, for example, for marketers to create content or for customer service teams to handle customer service interactions. We've seen strong growth there; for example, our customer service technology has seen a 10 times growth in chat and voice interactions.
We're also building domain-specific agents for specific industries, for example, to help people do shopping and commerce. Today we handle roughly 5 billion commerce transactions through our Commerce Agent. And we make all of these agents, as well as any bespoke ones people want to build, available through a single platform we call Agentspace, which provides a single pane for a company to access and use all of the AI technology within the organization.
We're seeing growth and a broadening of our addressable market by applying AI in domains that IT departments historically didn't serve: marketing, customer service, commerce, et cetera. Mercado Libre is one of the largest e-commerce systems in Latin America; they use our shopping and commerce technology. Wells Fargo uses Google Agentspace to help their employees use AI across trade management, contract management, and other domains.
So our deep product differentiation has driven the growth that we're seeing in customers. Now, how are we taking all of this to market? There are a few important things. First of all, we monetize AI in five different ways. We're seeing growth from net new customers, we're deepening our relationships with existing customers, and we're broadening our addressable market.
As a result of that, we're seeing growth in revenue, in our remaining performance obligations, or backlog, and in operating margin. The five ways we monetize AI: first, some people pay us for some of our products by consumption. So if you use our AI infrastructure, whether it's a GPU or a TPU, or use a model, you pay by token, meaning you pay for what you use.
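As a simple illustration of pay-by-token consumption pricing (a minimal sketch; the per-token rates below are made up, not Google Cloud's actual prices), the bill scales directly with usage:

```python
# Hypothetical consumption pricing: cost is proportional to tokens used.
PRICE_PER_1M_INPUT_TOKENS = 0.30   # assumed USD rate, for illustration
PRICE_PER_1M_OUTPUT_TOKENS = 2.50  # assumed USD rate, for illustration

def monthly_bill(input_tokens: int, output_tokens: int) -> float:
    """Pay-by-token: you pay for exactly what you use."""
    return ((input_tokens / 1e6) * PRICE_PER_1M_INPUT_TOKENS
            + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT_TOKENS)

# A month with 500M input tokens and 40M output tokens:
print(f"${monthly_bill(500_000_000, 40_000_000):,.2f}")  # $250.00
```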
Second, some of our products people pay for by subscription: a per-user, per-month fee, for example, for Agentspace or Workspace. Third, some monetization comes from increased product usage. If you use our cybersecurity agent and run threat analysis using AI, we're seeing huge growth in that: we're at over 1.5 billion of what we call threat hunts using Gemini, and that drives more usage of our security platform. Similarly, we see growth in our data cloud.
Fourth, we also monetize some of our products through value-based pricing. For example, some people who use our customer service system say, "I want to pay for it by the deflection rates you deliver." Some who use our creative tools to create content say, "I want to pay based on the conversion I'm seeing in my advertising system."
And then finally, we also upsell people from one version to another as they use more, because we have higher-quality models, more quota, and other things in higher-priced tiers. Because of this, we're capturing new customers faster. As I said, we've seen 28% sequential quarter-over-quarter growth in new customer wins in the first half of this year. Nine of the top ten AI labs and nearly all the AI unicorns are our customers.
We're deepening our relationships with existing customers; 65% of our customers are already using our AI tools in a meaningful way. Customers that use our AI tools typically end up using more of our products, for example, our data platform or our security tools. On average, those that use our AI products use 1.5 times as many products as those that are not yet using our AI tools. That then leads customers who sign a commitment or a contract to over-attain it, meaning they spend more than they contracted for, which drives more revenue growth.
Finally, we're growing and diversifying our revenue. Our revenue does not come from a single product line; we have many different product lines, all of them growing. And as Sundar and Anat, our CFO, have both mentioned, we've already made billions using AI.
We're growing revenue while bringing operating discipline and efficiency. Our remaining performance obligation, sometimes referred to as backlog, is now at $106 billion. It is growing faster than our revenue, and more than 50% of it will convert to revenue over the next two years. So not only are we growing revenue, we're also growing our remaining performance obligation.
We're also very focused on operating discipline to improve operating margins. There are three big areas of focus. One is making sure we're super efficient in using our fleet and our machines so that we get capital efficiency. There are many hundreds of optimization projects people have done, and the larger the fleet, generally, the more efficient you get, because you need less buffer for any individual customer. You've also seen a study some of our scientists published on the improvements we've made in inference, a 33 times efficiency gain in inference for some of the models over the last year.
So there's a lot of focus on continuing to optimize our fleet. Second, we're improving our go-to-market organization, which now has a large customer base to sell to. Selling to existing customers is always easier than selling to new customers, so it helps us improve the cost of sales as a percentage of revenue.
And finally, we're building on a large suite of existing products, which helps us improve our engineering productivity. You see that in our results: we're growing top line and operating income.
In closing, we've spent years building advanced AI technology of our own: chips, systems, tools, agents. We made those bets very early; much of the work you see today has been underway for many, many years. And we're not just reselling third-party technology. The reason we're winning is that this deep product differentiation is now being adopted by customers.
That's leading us to win new customers, deepen our relationship with existing customers, and broaden our addressable market. And in turn, that's leading us to grow revenue and operating income. Thank you.
Eric Sheridan (Goldman Sachs): Thank you, Thomas. Thanks so much; a lot of good stuff in there. So I want to come back to where you started the presentation, talking about the state of the industry today. As we exit '25 and look towards 2026, where are we in terms of cloud adoption and client usage trends, and how is Google Cloud evolving within that competitive landscape and those secular growth themes?
Thomas Kurian, CEO, Google Cloud: Cloud adoption is still in its early stages. If you count servers, depending on which analysts you read, a vast majority still run on premises in people's own data centers. So there's a lot of remaining opportunity ahead for people to migrate these workloads, to modernize them, to transform them.
There are different adoption patterns in different industries. Some are moving more quickly. Some, for example government agencies, move a bit more slowly because of compliance and other regulations. Europe has generally been slower to move because of sovereign cloud requirements, which we've now introduced offerings for. So we are starting to see many different drivers for people to pick that up.
But in the past, people chose cloud primarily as a mechanism to get developer efficiency, meaning you can get infrastructure on demand to host applications and save money by consolidating compute and storage. That continues to be important, but it's not the primary driver anymore. The big driver now is: I really want to transform the organization; can you help me by bringing AI expertise and products?
Eric Sheridan (Goldman Sachs): So with that as a jumping-off point, when you sit and look at the enterprise landscape today and the way enterprises are adopting AI, put a finer point on your presentation in terms of how those trends inform your strategic priorities as a company.
Thomas Kurian, CEO, Google Cloud: So we see organizations using AI in four domains. Some companies are using it to build digital products: Natura Cosmetics, Snap, the work we did with Warner Brothers to create The Wizard of Oz. Those are all essentially using AI to advance a digital product.
Others are using it to transform customer service. And when I say transform customer service, it's not just in the call center, as we do with Verizon, but at the point of sale, as we do with Wendy's, and in a vehicle, as we're showing with Mercedes today in Munich. So there are many different places where people see that customer-interface transformation.
Others are using it to streamline the core of the company in the back office. When I say the back office: Home Depot is using it to answer HR help desk questions; when employees ask questions regarding benefits and other things, they're using our agent to answer them. AES, which is a large energy company, streamlined their regulatory and audit process, reducing the cycle time. Tyson Foods is using it in supply chain.
And then finally, we've seen a lot of organizations now using it in their IT departments. In their IT departments, broad brush, there are people using it to write code, and not just to write code but to improve the quality of the code being written. And there are people using it for cyber, because in cyber there's generally a bottleneck in terms of how many cyber analysts you have, and these AI tools can be used both to help you identify and prioritize what threats are occurring and then to much more quickly analyze whether you've been compromised. So those are the four big domains where we see AI being adopted.
Eric Sheridan (Goldman Sachs): Okay. When you think about your full-stack approach to AI, talk to us a little bit about how that might be creating competitive advantages in the marketplace, and how it helps translate into winning deals.
Thomas Kurian, CEO, Google Cloud: It's a great question. Our stack is open, meaning we offer our own accelerators, and we have a super close working relationship with Nvidia, because people want a choice of different types of system configurations. Same thing with our models: we offer our own as well as third-party models.
What it helps us do, though, is optimize things differently up and down the stack. I'll give you an example. Look at the work we do with capital markets, applying AI to synthesize data from information sources and then feed algorithmic models. You need a certain set of skills to reason on the data, a certain capability to choose the right tool, and the ability to do it with ultra-low latency. Surprisingly, that combination of things we bring from the enterprise is the same thing you need at the model level to build a great coding tool.
It turns out that if you want to do software engineering, you have to choose the right tool for the right task, and you want to be able to generate code with low latency so that when you do autocompletion, it happens.
And it's also the same thing that applies, in certain circumstances, on the Search side. So the fact that we serve all of these different design centers, and that we're using one model series for all of Alphabet as well as for our customers, helps improve the model. And because we're optimizing that model up and down the stack, as Jeff Dean and our team have talked about with how much more efficient we've become at serving, it also helps us optimize the cost of inference. So we can codesign things, we get leverage from all of the domains we're serving across both the enterprise and the consumer side, and we can optimize the cost structure when we deliver these things.
Eric Sheridan (Goldman Sachs): Okay. Building on that theme, in your presentation you touched upon the idea of your AI infrastructure and building advantage and scale around that. Talk a little bit about where custom silicon and TPUs make sense as opposed to working with external suppliers. And talk about some of the key learnings from customers who have used TPUs and the use cases where they deploy them.
Thomas Kurian, CEO, Google Cloud: Broad brush, I think when people look at models, they think there's one type of model. There are many different types: dense models, mixture-of-experts models, sparse models, where the question is whether you need a sparse core or not. So we offer a range of accelerators.
Where people really choose the right thing for their model is based on a variety of factors; it often comes down to the experts sitting down and actually trying it. We see four key things. The first: are you doing a kind of hero model run? If you're running a hero model run, it's typically on a giant cluster that you want to scale out, and they care a lot about FLOPS per dollar, meaning how much compute are you getting per dollar?
Second, how efficiently are you able to load your dataset into memory, that is, how much HBM do you have? Third, are all the nodes in the cluster communicating with super predictable latency? That's where the optical network comes in. And fourth, can you use things like the compiler to really optimize what, at the bottom level, is the equivalent of an instruction set?
And so the TPU is seen as really attractive by many of the leading labs because it gets much more throughput through the system for their training runs. It's also being used by people for inference, and we have a very close working relationship with Nvidia that allows customers to train on TPU and serve on GPU, or vice versa. There are a lot of things we've optimized with Nvidia; for example, JAX is optimized not just on TPU but on GPU. So it's not just the infrastructure but the entire software layer on top.
Eric Sheridan (Goldman Sachs): Got it. Understood. You laid out a lot of initiatives on the product side and the platform side, all leading to the types of growth you're seeing today. What are the biggest priorities for investment in the business in support of that growth?
And how do you think about striking the right balance between those investments and driving growth?
Thomas Kurian, CEO, Google Cloud: We look at investments in three big categories. Obviously, our supply chain and capital investments, which span data centers, power, long-term power contracts, and what we're doing with our different geographic locations, because inference now needs to happen in many different countries for sovereignty reasons. So one is our capital infrastructure: we've had a team doing that at enormous scale for years and years, we continue to do it, and we're very thoughtful about how we're doing it.
In each area, we also look at how we get more efficient. For example, we're constantly optimizing. One example: as you get these more powerful chips, they also take a lot more power, and power is in many cases a scarce resource. We have the most efficient PUE in the world; PUE, or power usage effectiveness, measures how much total power a facility consumes relative to the power that actually reaches the computing equipment. And we invested very early in water cooling, which gives you another lift in throughput through these systems. That's an example of where we said, hey, there's likely to be a power issue, so let's design a set of solutions early. And that's given us an advantage there.
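For a concrete sense of the metric (a minimal sketch; the figures are hypothetical, not Google's reported numbers), PUE is total facility power divided by the power delivered to the IT equipment, so a value close to 1.0 means almost no overhead:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# All figures below are hypothetical, for illustration only.
total_facility_kw = 110_000  # assumed: compute plus cooling, power delivery, etc.
it_equipment_kw = 100_000    # assumed: power actually reaching the machines

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")  # 1.10: 10% overhead beyond the compute itself
```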
Second, we invest in products. We're very thoughtful and disciplined about which domains we're solving for and what we want to build as products.
And then we invest in the go-to-market organization. You know, when I started at Google, nobody thought we'd be where we are. For the first several years, almost all our sales were to brand-new customers, and it's difficult to win them, but we've actually won many of them. Now we have teams that know how to sell with specialization for specific products, we know how to sell to existing customers, and we have a different model for selling to new customers. All that sophistication has been built over many years.
Eric Sheridan (Goldman Sachs): You also talked in your slides about how the margins continue to build in the reported segment behind Google Cloud. Talk to us a little bit about not only getting it right on the growth side, but also, as you touched on, driving efficiencies and continuing to make progress on the margin side of the business over the long term.
Thomas Kurian, CEO, Google Cloud: There are a lot of people working really hard to continue to improve top line and operating margin. Some of it comes down to really fundamental things. If you look at us, we made some decisions early to say we're going to build our own chips, our own models, and also products around the models.
So when you're not just distributing somebody else's stuff, you obviously can optimize cost and improve margins. The same goes for the products we've built around the model. In 2021, we saw a lot of companies talking to us about how their call centers were shut down because of COVID and they could not handle the volume of calls coming in, so we said, let's build an AI-powered customer service system.
That's now being used at large scale, and it's an example of a very differentiated product; it's not just "here's a model, access it through an API," there's a lot of capability we built into it. Those decisions we made early, and years of continued effort, are what we remain very focused on: improving both top line and operating income.
Eric Sheridan (Goldman Sachs): I'll try to squeeze one last one in. When you think about the deployment of AI that's happening right now in the ecosystem and you look at the infrastructure layer, the model layer, the application layer, where are you seeing the most exciting things being deployed that could be elements of driving growth in your business over the medium to long-term?
Thomas Kurian, CEO, Google Cloud: We're roughly in a six-month cycle. What I mean by that is we find that a major model revision opens up an entire new category of capability, and that in turn drives us to build value-added products on top of it.
So for instance, if you look at Veo 3, it's an amazing video creation system. We have enormous interest from advertising companies, creative labs, media companies, movie studios, et cetera. That market did not exist prior to Veo reaching that level of breakthrough.
And then, because we're co-engineering it with Google DeepMind, we're able to build an entire set of assets around it, as product, that people can use to apply it to specific domains. That's roughly the cycle we're on, and it may get faster. A lot of it depends on what kind of breakthroughs we're working on.
Eric Sheridan (Goldman Sachs): Great example. I think we're going to have to leave it there. Thomas, thank you so much for coming to the conference this year. Please join me in thanking Thomas and the Alphabet team for being part of the conference.