
The report released today is full of eye-catching statistics. Much of its insight comes from data that confirms intuitions you may already hold, such as the sense that the US is pursuing AI more aggressively than any other nation: it operates 5,427 data centers (and counting), more than ten times as many as the next leading country.
It is also a reminder that the AI sector’s hardware supply chain has serious choke points. Perhaps the most astonishing fact: “A single company, TSMC, manufactures nearly every major AI chip, causing the global AI hardware supply chain to be reliant on one foundry located in Taiwan.” Just one foundry!
But the biggest takeaway I get from the 2026 AI Index is that the current AI landscape is riddled with contradictions. As my colleague Michelle Kim wrote today in her article on the report: “If you’re keeping up with AI updates, you’re likely experiencing whiplash. AI is a gold rush. AI is a bubble. AI is eliminating jobs. AI struggles to read a clock.” (The Stanford report notes that Google DeepMind’s leading reasoning model, Gemini Deep Think, won a gold medal in the International Math Olympiad yet often cannot read analog clocks.)
Michelle gives an excellent summary of the report’s highlights. But I want to dwell on a question that’s been nagging at me: Why is it so hard to grasp what’s really happening in AI right now?
The starkest gap appears to be between experts and the general public. “AI scholars and the broader public perceive the trajectory of this technology quite differently,” the authors of the AI Index write. “When evaluating AI’s effects on employment, 73% of U.S. specialists are optimistic, contrasted with merely 23% of the public, resulting in a 50 percentage point difference. Similar discrepancies arise concerning the economy and healthcare.”
That’s a massive divergence. What is the cause? What insights do experts possess that the public lacks? (“Experts” refers to U.S.-based researchers who participated in AI conferences during 2023 and 2024.)
I suspect part of the issue stems from the differing experiences that experts and non-experts draw upon to form their opinions. “Your level of admiration for AI is directly linked to how extensively you use AI for coding,” a software developer remarked on X recently. This could be half in jest, but there’s undoubtedly a kernel of truth in it.
The newest models from the leading labs now outperform their predecessors at generating code. Because technical tasks like coding have definitive right answers, training models to perform them is easier than training for more open-ended tasks. And coding models have proved lucrative, prompting developers to invest heavily in improving them.
This means that people using these tools for programming and other technical work are seeing the technology at its best. Beyond those applications, results are more mixed, and LLMs still make basic errors. The pattern has been called the “jagged frontier”: models excel in some areas while performing poorly in others.
The prominent AI researcher Andrej Karpathy shared additional thoughts. “Based on my [timeline], there’s an expanding chasm in the comprehension of AI capabilities,” he responded to that post on X. He pointed out that power users (defined as those who employ LLMs for programming, mathematics, or research) not only stay informed about the latest models but are often willing to spend $200 monthly for the premium versions. “The recent advancements in these sectors this year have been nothing short of astounding,” he remarked.
Because LLMs keep improving, someone paying to use Claude Code today is effectively engaging with a different technology than someone who tried the free version of Claude to plan an event six months ago. These two groups are talking past each other.
So where does this leave us? I believe there are two realities. Yes, AI is far more advanced than many people realize. And yes, it still stumbles on many of the problems people care about (and may continue to). Anyone making predictions about the future, on either side, should keep that in mind.