
This piece was first published in The Algorithm, our weekly newsletter focused on AI. To receive stories like this directly in your inbox, subscribe here.
In February, I picked up a pamphlet at an anti-AI protest in London. I can't say whether its authors meant to reference South Park's underpants gnomes. But if they did, they nailed it: "Step 1: Cultivate a digital super intelligence," it read. "Step 2: ? Step 3: ?"
Produced by Pause AI, a global activist group that co-hosted the protest, the pamphlet ended with an urgent plea to its readers: "Pause AI until we clarify what Step 2 is."
In the South Park episode "Gnomes," first aired in 1998, Kenny, Kyle, Cartman, and Stan discover a clan of gnomes that sneak out at night to steal underpants from people's drawers. Why? The gnomes reveal their business plan: "Phase 1: Collect underpants. Phase 2: ? Phase 3: Profit."
The gnomes' business plan has since become a classic internet meme, used to mock everything from startup pitches to policy proposals. Memelord in chief Elon Musk once invoked it in a conversation about funding a mission to Mars. Today, it captures the state of AI. Companies have built the technology (Step 1) and promised a revolution (Step 3). How to get from one to the other remains a giant question mark.
For Pause AI, Step 2 should involve some form of regulation. But exactly what that regulation would look like, and who would enforce it, is up for debate.
AI boosters, on the other hand, are sure that Step 3 spells salvation and tend to skip over the middle part. They see us hurtling toward sunny uplands on the strength of an "economically transformative technology," as OpenAI's chief scientist, Jakub Pachocki, put it to me a few weeks ago. They have a rough sense of where they want to go, mostly. The specifics are hazy and still some way off. And everyone is taking a different route. Will they all make it? Will anyone?
For every big claim about the future, there is a more sober assessment of how deployment actually plays out, one that dampens the hype. Take two recent studies. The first, from Anthropic, predicted which kinds of jobs will be most affected by LLMs. (A takeaway: managers, architects, and media workers should brace for change; groundskeepers, construction workers, and hospitality staff, not so much.) But those predictions are largely guesswork, based on the tasks LLMs seem good at rather than how the technology actually performs in real-world settings.
The second, published in February by researchers at Mercor, an AI hiring startup, tested a range of AI agents built on top of flagship models from OpenAI, Anthropic, and Google DeepMind on 480 workplace tasks typically done by human bankers, consultants, and lawyers. Every agent they tested failed at most of its tasks.
Why such starkly divergent views? A few things are going on. First, it always pays to look at who is making the claims (and why). Anthropic has skin in the game. Second, many of those insisting that a seismic shift is coming have drawn that conclusion largely from the rapid progress of AI coding tools. But not every task can be solved with code alone. Other studies have found that LLMs struggle with strategic decision-making, for example.
What's more, when these tools are deployed, they aren't dropped into a pristine environment. They have to work in offices full of people and entrenched processes. Sometimes adding AI to the mix makes things worse. Sure, maybe those processes need to be torn up and rebuilt around the new technology before it can be transformative, but that will take time (and nerve).
That yawning gap? It sits exactly where Step 2 should be. The lack of consensus about what's coming, and how, creates an information vacuum that gets filled by whatever outrageous claim is doing the rounds that week, evidence be damned. We are so far from any real understanding of what lies ahead, and how it will unfold, that a single social media post can (and does) move markets.
We need less speculation and more empirical data. But that will require transparency from model makers, collaboration between researchers and companies, and new ways of evaluating this technology that tell us what actually happens when it is deployed in the real world.
The tech industry (and thus the global economy) is riding on the promise that AI will indeed be transformative. But that is not yet a safe assumption. The next time you come across bold claims about the future, remember that most businesses are still figuring out what to do with their underpants.