
Sundar Pichai
CEO, Google
Sundar Pichai is the engineer-turned-CEO leading Google through what he calls an extraordinary AI inflection point, balancing trillion-dollar infrastructure bets with safety, climate, and long-term societal impact.
About Sundar Pichai
Sundar Pichai leads Google through what he calls an extraordinary AI inflection point. In this interview he breaks down how Google is investing at massive scale, how he thinks about bubbles, energy and safety, and why emotional courage and adaptability matter for the next generation of builders and leaders.
Interview
December 12, 2025
How would you describe what is happening in Silicon Valley right now as a business leader?

It is an extraordinary moment even by Silicon Valley standards. About every decade you get an inflection point. We had the personal computer, then the internet in the late ’90s, then mobile, then cloud. Now it is clearly the era of AI. You can feel that excitement across campuses and across the whole region.
What does the current AI wave look like in terms of investment and scale for Google and your peers?

One way to see the scale is to look at capital spending for AI infrastructure. Four years ago Google spent under 30 billion dollars per year. This year it will be over 90 billion. If you add up what all the major companies are doing, it is well over a trillion dollars going into building for this AI moment.
Do you think this AI boom is a bubble from a business perspective?

I see two sides. On one side, model capabilities are improving fast, people are using them, and we see real demand. We are even constrained in serving that demand. So the excitement is rational. At the same time, in every investment cycle industries overshoot. The internet had excess investment too, yet nobody questions its long-term impact now.
How do you manage the risk of over-investing at Google in such an aggressive buildout?

No company is immune from over-investment, including us. If we over-invest, we will have to work through that phase. What helps is our full-stack, long-term approach. From the underlying physical infrastructure and research to Search, YouTube, Android and our other products, we can spread the investments and focus on value that compounds across the whole ecosystem.
What is your core strategy for building AI inside Google?

When I became CEO, one of the first things I did was to make Google an “AI-first” company. For us that means a full-stack approach. We invest from custom infrastructure and chips all the way to the research that pushes AI forward, and then into products and platforms. Doing all those parts together is how we build durable advantage.
How do you see AI agents changing the way people and businesses work in the next few years?

Right now people can ask questions and have intelligent exchanges. In the next twelve months you will see AI doing more complex tasks, like buying a birthday gift, helping evaluate an investment, or explaining treatment options. These agentic experiences can support decisions and workflows, saving time and making people more effective in their personal and professional lives.
When you talk about automation, how do you balance productivity gains with worries about jobs?

I think about it like earlier household technologies. When my family got our first refrigerator, it changed my mom’s life. It automated some tasks but freed her to do other things. Today a radiologist faces more scans and more images every year. AI can help them cope with that demand. It is about support and augmentation, not just replacement.
What practical career advice would you give parents and students trying to navigate this AI era?

I would not change the basic advice. A wide variety of disciplines will still matter. What I would add is: embrace the technology and learn to use it in your field. Teachers, doctors, lawyers, creators will still be needed, but the people who do best will be those who adapt and work well with AI as part of their toolkit.
How should professionals think about using tools like Gemini that are not always accurate?

We work hard to ground models in real-world information, including using Google Search as a tool inside Gemini. But these systems predict the next token, so they are prone to errors. People should use them with that in mind. They are great for creativity and exploration. For high-stakes facts, you should cross-check and not blindly trust every answer.
What leadership responsibility do you feel around truth and information quality in the AI age?

Truth matters. Journalism matters. Human experts matter. If you only had stand-alone AI systems, information could become less reliable. That is why the overall information ecosystem must be richer than just AI. Search, trusted sources, teachers, doctors and institutions all stay important. Our responsibility is to build AI that fits into and strengthens that ecosystem, not replaces it.
AI needs huge amounts of energy. How do you reconcile that with climate goals as a global CEO?

I do not see it as a pure trade-off. The energy demand from AI is very large, so we are investing heavily in new sources: fusion, small modular nuclear, geothermal, solar and battery technology. The pressure from AI buildout is actually accelerating innovation and capital going into cleaner, more abundant energy, which can support both growth and climate goals.
Your 2030 climate targets are under pressure from AI growth. How do you handle that tension?

We still have our 2030 goals and publish progress. It is true that faster-than-expected AI growth makes the path harder. Our answer is to meet the moment by pushing new energy technologies and smarter infrastructure. It is about scaling both digital systems and green energy together instead of slowing one to protect the other, even if the path is not linear.
What kind of relationship do you think big AI companies like Google should have with governments right now?

This is an extraordinary moment for economies and for national security. As one of the leading AI companies, we feel a deep responsibility to engage constructively. That means working with the White House, the UK government and others to align on action plans, think carefully about misuse risks, and create industry-wide frameworks that make the technology broadly beneficial.
As an immigrant CEO, how do you see immigration in relation to innovation and growth at Google?

If you look at many of the fundamental breakthroughs in technology, immigrants are heavily represented. At Google we have had Nobel prizes and key contributions from immigrants. I came to the United States on an H-1B visa myself. I think governments understand this value and are trying to fix shortcomings so companies can keep attracting and retaining global talent.
How do you balance moving fast with AI and still being responsible on safety inside Google?

There is real tension. The technology is moving fast and users expect us to use AI to answer more complex questions on their phones. We must move quickly to meet that demand. At the same time, we have increased investment in AI safety and security, including tools to detect AI-generated content. We try to grow boldness and responsibility in parallel.
Are you worried about one company owning AI, like an "AGI dictatorship"?

I agree with the concern in principle. No single company should own a technology as powerful as AI. But if you look at the ecosystem today, there are many frontier models, strong open-source efforts and serious work in multiple countries, including China. We are very far from a world where everyone depends on only one AI provider for everything.
Looking ahead, what gives you personal optimism about technology and human adaptation?

We often take progress for granted. We used to talk a lot about the Turing test and now we have gone past it. In San Francisco there are driverless cars on the road. I recently sat my eighty-year-old father in a Waymo car and saw his sense of wonder. Moments like that remind me humans can adapt and benefit from change.
Video Interviews with Sundar Pichai
Full interview: Google CEO Sundar Pichai on the ‘AI boom’ and the future of AI