
Nvidia said Tuesday that its technology remains a generation ahead of the competition, responding to Wall Street concerns that Google's in-house AI chips could challenge the company's lead in AI infrastructure.
“We appreciate Google’s achievements — they have made significant progress in AI and we continue to provide support to Google,” Nvidia shared in a post on X. “NVIDIA is leading the industry by a generation — it is the sole platform that manages every AI model and operates wherever computing takes place.”
The statement came after Nvidia's stock fell 3% on Tuesday following a report that Meta, a major customer, is in talks with Google to use its tensor processing units, or TPUs, in Meta's data centers.
In its post, Nvidia argued that its chips are more flexible and powerful than so-called ASICs, or application-specific integrated circuits, such as Google's TPUs, which are designed for a specific company or task. The latest generation of Nvidia's chips is called Blackwell.
“NVIDIA delivers superior performance, flexibility, and interchangeability compared to ASICs,” Nvidia commented in its release.
Nvidia holds more than 90% of the market for AI chips with its graphics processing units, or GPUs, according to analysts, but Google's in-house chips have recently drawn attention as a competitive alternative to the expensive but powerful Blackwell chips.
Unlike Nvidia, Google does not sell its TPUs to other companies; it uses them internally and rents access to them through Google Cloud.
Earlier this month, Google released Gemini 3, a well-received frontier AI model that was trained on the company's TPUs rather than Nvidia GPUs.
“We are witnessing a surge in demand for both our dedicated TPUs and Nvidia GPUs,” a representative from Google stated. “We remain committed to supporting both, as we have consistently done for years.”
Nvidia CEO Jensen Huang addressed the growing TPU competition on an earnings call earlier this month, noting that Google remains a customer for Nvidia's GPUs and that Gemini can run on Nvidia's chips.
He also noted that he remains in contact with Demis Hassabis, the CEO of Google DeepMind.
Huang said Hassabis had texted him to confirm that the prevailing theory in the tech industry, known to AI developers as "scaling laws," which holds that using more chips and more data produces more powerful AI models, remains "valid." Nvidia argues that these scaling laws will translate into growing demand for the company's chips and systems.
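Scaling laws are typically written as power laws relating a model's error to the resources poured into training it. The form below is only an illustrative sketch of the idea referenced above, not a formula from Nvidia or Google; the constants E, A, B, α and β are fitted empirically and are assumptions of the illustration:

L(N, D) ≈ E + A / N^α + B / D^β

Here L is the model's loss, N is the number of parameters and D is the amount of training data; because the loss keeps falling as N and D grow, the theory implies that ever more chips and data remain worth buying.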