Let me ask you something: How do you feel about AI right now? Are you still excited? When OpenAI or Google releases a new model, do you still get that thrill? Or has the novelty worn off a bit? Go on, you can be honest with me.
Honestly, I feel a little silly even asking, like a spoiled child who got too many presents at Christmas. AI is astonishing. It is one of the most significant technologies to emerge in decades (despite its many, many shortcomings and, well, problems).
And yet I can’t shake the feeling: Is that all?
If that sentiment resonates, there’s a good reason for it: the hype we’ve been sold over the past few years has been enormous. We were promised AI would solve climate change. That it would reach human-level intelligence. That it would mean the end of work!
Instead, we got lackluster AI, chatbot mania, and tools that keep nagging you to punch up your email newsletter. Maybe we got what we deserved. Or maybe it’s time to rethink what AI is for.
That’s the idea at the heart of a new series of articles, published today, called Hype Correction. AI is still the biggest show in town, we argue, but it’s time to reset our expectations.
As my colleague Will Douglas Heaven writes in the introduction to the package, “You can’t help but think: When the element of surprise is gone, what’s left? How will we perceive this technology a year or five from now? Will we consider it worth the huge costs, both financially and environmentally?”
Elsewhere in the package, James O’Donnell takes a look at Sam Altman, AI’s hype man in chief, through the lens of his own words. Meanwhile, Alex Heath breaks down the AI bubble, explaining what it all means and what to watch for.
Michelle Kim puts one of the AI hype cycle’s biggest claims to the test: that AI would eliminate the need for whole categories of jobs. If ChatGPT can pass the bar exam, doesn’t that mean it will replace lawyers? Well, not yet, and maybe not ever.
In a similar vein, Edd Gent tackles the big question about AI coding: Is it really as good as it seems? The jury, it turns out, is still out. And David Rotman digs into the real-world work that has to happen before AI materials discovery can have its ChatGPT moment.
Meanwhile, Garrison Lovely checks in with some of the biggest names in AI safety and asks: How are the doomers holding up, now that people seem a little less worried about meeting their end at the hands of superintelligent AI? And Margaret Mitchell reminds us that the hype around generative AI can obscure the AI advances we should actually be celebrating.
Let’s not forget: AI existed before ChatGPT, and it will exist after. This hype cycle has been intense, and we don’t yet know its long-term effects. But AI isn’t going away. We shouldn’t be surprised that the dreams we were sold haven’t materialized yet.
The more likely story is that the real wins, the truly impressive applications, are still to come. An enormous amount of money is being bet on exactly that. So yes: the hype was never going to be sustainable in the short term. Where we are now may be the start of a post-hype era. Ideally, this hype correction will reset our expectations.
Let’s all take a moment to breathe, shall we?