
Provided by Thoughtworks
This year has been a live experiment unfolding within the technology sector, with AI's software engineering abilities tested alongside those of human technologists. Even given AI's strong showing in 2025, the shift from vibe coding to what is now being called context engineering indicates that while human developers' roles are adapting, they remain essential.
The latest edition of the “Thoughtworks Technology Radar” encapsulates this sentiment, reporting on the technologies our teams employ in client projects. Within it, we note the rise of methods and tools crafted to assist teams in more effectively navigating the complexities of context management with LLMs and AI agents.
Taken together, this points to a clear trajectory for software engineering and the broader field of AI. After years in which the industry assumed that advancements in AI were purely about scale and speed, it is becoming apparent that what truly matters is the capacity to manage context proficiently.

Vibes, antipatterns, and innovation
In February 2025, Andrej Karpathy introduced the term vibe coding, which rapidly gained traction. This trend certainly ignited discussion at Thoughtworks; many among us held reservations. In an April edition of our technology podcast, we voiced our apprehensions and expressed caution regarding the potential evolution of vibe coding.
As anticipated, given the loosely defined nature of vibe coding, antipatterns have begun to surface. Complacency with AI-generated code, for example, features in the latest volume of the Technology Radar: early forays into vibe coding revealed a degree of complacency about the capabilities of AI models, and as users sought more functionality and prompts grew longer, model reliability began to diminish.
Adventures with generative AI
This is part of what is driving the growing interest in context engineering. We recognize its significance in our own use of coding assistants such as Claude Code and Augment Code. Delivering essential context, sometimes called knowledge priming, is vital: it makes outputs more consistent and dependable, ultimately resulting in better software that requires less effort, with fewer rewrites and potentially higher productivity.
When optimally set up, we’ve achieved favorable outcomes from employing generative AI to comprehend legacy codebases. When implemented correctly with the necessary context, it can even assist where full access to source code is not available.
It's crucial to understand that context involves more than just an abundance of data and details. This lesson has emerged from our experience using generative AI for forward engineering. It may seem counterintuitive, but we've found that AI performs better when it is distanced from the underlying system, that is, further detached from the specifics of the legacy code. Doing so broadens the solution space, allowing us to harness the generative and creative potential of the models we employ.
Context is essential in the agentic era
The backdrop for recent changes is the rise of agents and agentic systems — both as products organizations wish to create and as technology they aim to utilize. This has compelled the industry to seriously confront context and shift away from a solely vibe-driven approach.
Indeed, agents do not simply execute assigned tasks; they require considerable human oversight to ensure they can effectively address intricate and variable contexts.
A variety of context-related technologies aim to address this issue, including agents.md, Context7, and Mem0. However, it’s also a matter of methodology. For example, we’ve experienced success with anchoring coding agents to a reference application — essentially offering agents a contextual groundwork. We are also testing the use of teams of coding agents; while this may sound complicated, it actually alleviates some pressure by not obligating a single agent to manage all the extensive layers of context necessary for its success.
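To make the idea concrete, here is a minimal sketch of the kind of shared context file the agents.md convention describes: a markdown document checked into the repository that agents read before acting. The project details, commands, and conventions below are hypothetical, purely illustrative of the format.

```markdown
# AGENTS.md

## Project overview
Order-management service. Python 3.12, FastAPI, PostgreSQL.

## Build and test
- Install: `pip install -e ".[dev]"`
- Run tests: `pytest -q` (all tests must pass before committing)

## Conventions
- Type hints are required on public functions.
- Never modify files under `migrations/` by hand.
- Follow the reference implementation in `services/payments/` when
  adding a new service module.
```

The last bullet illustrates the reference-application technique mentioned above: pointing the agent at one exemplary module gives it a contextual groundwork to imitate rather than forcing it to infer conventions from the whole codebase.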
Striving for consensus
Ideally, this space will develop as practices and standards become firmly rooted. It would be an oversight not to highlight the significance of the Model Context Protocol, which has established itself as the primary protocol for linking LLMs or agentic AI to contextual sources. Additionally, the agent2agent (A2A) protocol is pioneering the standardization of agent interactions.
It is yet to be determined whether these standards will prevail. However, it is essential to contemplate the daily practices that enable us, as software engineers and technologists, to work harmoniously, even amidst the complexities of dynamic systems. Certainly, AI requires context, but so do we. Approaches like curated shared instructions for software teams may not seem like the most groundbreaking innovation, but they are remarkably effective in facilitating cooperation among teams.
There’s also an important discussion to be had regarding what these developments signify for agile software development. The concept of spec-driven development appears to be gaining momentum, but questions remain on how we can maintain adaptability and flexibility while also establishing robust contextual frameworks and foundational truths for AI systems.
Software engineers can tackle the context challenge
Clearly, 2025 has been a significant year in the advancement of software engineering as a discipline. The industry must watch many developments closely, but this period is also full of opportunity. While anxieties about AI automating software jobs may linger, the shift in discussion from speed and scale to context places software engineers at the forefront of these developments.
Once more, it will be their responsibility to experiment, collaborate, and learn — the future hinges on it.
This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.