

A Wall Street Journal report published Friday said Nvidia insiders had expressed reservations about the deal and that Huang had privately criticized what he called a lack of discipline in OpenAI’s business strategy. The Journal also said Huang had voiced concern about competition from Google and Anthropic. Huang dismissed those assertions as “nonsense.”
Nvidia shares slipped about 1.1 percent on Monday after the reports. Sarah Kunst, managing director at Cleo Capital, told CNBC the exchange was unusual. “One of the things I did notice about Jensen Huang is that there wasn’t a strong ‘It will be $100 billion.’ It was, ‘It will be big. It will be our biggest investment ever.’ And so I do think there are some question marks there.”
In September, Bryn Talkington, managing partner at Requisite Capital Management, noted to CNBC the circular nature of these investments. “Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia,” Talkington said. “I feel like this is going to be very virtuous for Jensen.”
Tech critic Ed Zitron has long criticized Nvidia’s circular investments, which touch dozens of tech firms, from major companies to startups, most of which are also Nvidia customers.
“NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA,” Zitron wrote on Bluesky last September, “even though these companies are horribly unprofitable and will eventually die from a lack of any real demand.”
Chips from other sources
Beyond sourcing GPUs from Nvidia, OpenAI reportedly explored partnerships with startups Cerebras and Groq, both of which produce chips aimed at cutting inference latency. In December, Nvidia struck a $20 billion licensing agreement with Groq, which sources told Reuters ended OpenAI’s talks with the company. Nvidia also hired Groq’s founder and CEO Jonathan Ross and other senior leaders as part of that arrangement.
In January, OpenAI announced a $10 billion deal with Cerebras instead, adding 750 megawatts of computing capacity to accelerate inference through 2028. Sachin Katti, who joined OpenAI from Intel in November to lead compute infrastructure, said the partnership brings “a dedicated low-latency inference solution” to OpenAI’s platform.
OpenAI has clearly been hedging its bets. In addition to the Cerebras agreement, the company signed a deal with AMD in October for six gigawatts of GPUs and announced plans with Broadcom to develop a custom AI chip to reduce its dependence on Nvidia. When those chips will be available remains unknown.