Presented by Intel
Parents and caregivers of small children face a range of anxieties about developmental milestones, from infancy through adulthood. How many months it takes a baby to start talking or walking is often used as a benchmark of wellness, or as a signal that further assessment is needed to pinpoint a possible health issue. A parent celebrates a child's first steps, then grasps how much has changed when the child can dash around outdoors rather than crawl slowly through a secure indoor environment. Suddenly, safety, childproofing included, is seen through a new lens and demands a different strategy.

Generative AI hit its own early developmental milestone between December 2025 and January 2026 with the rollout of no-code platforms from various providers and the launch of OpenClaw, an open-source personal assistant released on GitHub. The generative AI infant is no longer crawling on the floor; it has surged ahead, and very few governance frameworks were adequately prepared.
The accountability dilemma: It’s you, not them
Until now, governance has concentrated on the risks of model outputs, with humans adequately involved before crucial decisions, such as loan approvals or job applications, were made. The focus was on model behavior: drift, alignment, data extraction, and poisoning. The pace was set by a human interacting with a model in chatbot style, with plenty of back-and-forth between machine and human.
Now, with autonomous agents operating inside complex workflows, the vision and the benefits of applied AI call for far fewer humans in the mix. The objective is to run a business at machine speed by automating routine tasks that have well-defined architecture and decision rules. From a liability standpoint, there is no reduction in enterprise or business risk when a machine handles a workflow instead of a human. CX Today sums up the situation succinctly: “AI performs the tasks, humans bear the risk,” and California state law AB 316, which became enforceable on January 1, 2026, eliminates the “AI did it; I didn’t sanction it” defense. This mirrors parenting, where an adult is held accountable for a child’s actions that harm the wider community.
The challenge: without code that enforces operational governance tuned to the levels of risk and liability at each step of the workflow, the advantages of autonomous AI agents are undermined. Governance has historically been static, matched to the interaction pace typical of chatbots. Autonomous AI, by its nature, removes humans from many decisions, and governance has to change with it.
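As a minimal sketch of governance as code, consider tagging each agent step with a risk tier and letting only low-risk steps run at machine speed, while high-risk steps block until a person approves them. Everything below, from the tier names to the action names, is hypothetical and for illustration only.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., read-only lookups
    HIGH = "high"      # e.g., writes to financial records


@dataclass
class AgentAction:
    name: str
    tier: RiskTier


def execute(action: AgentAction, human_approved: bool = False) -> str:
    # Low-risk steps run at machine speed; high-risk steps block
    # until a human signs off, keeping liability with a person.
    if action.tier is RiskTier.HIGH and not human_approved:
        return f"QUEUED for approval: {action.name}"
    return f"EXECUTED: {action.name}"


print(execute(AgentAction("fetch_invoice_status", RiskTier.LOW)))
print(execute(AgentAction("update_ledger_entry", RiskTier.HIGH)))
```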
Evaluating permissions
Just as one would not hand a three-year-old a video game console that remotely operates a military tank or a combat drone, allowing a probabilistic system that can modify essential enterprise data to run without real-time safeguards carries substantial risk. For example, agents that integrate and chain actions across multiple corporate systems can end up with more effective permission than any single human user would hold. To move forward, governance needs to evolve from policies written by committees to operational code built into workflows from the outset.
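One hedged illustration of that principle, with invented scope names: compute an agent's effective permissions as the intersection of its own grants and those of the human who invoked it, so chained actions can never exceed what a single user could do alone.

```python
def effective_scope(agent_scopes: set[str], user_scopes: set[str]) -> set[str]:
    # The agent may only act where BOTH it and the invoking human
    # are authorized; chaining across systems cannot widen its reach.
    return agent_scopes & user_scopes


agent = {"crm:read", "crm:write", "billing:write"}   # granted to the agent
user = {"crm:read", "crm:write"}                     # granted to the human

allowed = effective_scope(agent, user)
print(sorted(allowed))                     # ['crm:read', 'crm:write']
assert "billing:write" not in allowed      # the agent cannot outrank its user
```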
A humorous meme about toddlers and toys opens with every rationalization for why whatever toy you are holding is theirs, and ends with a broken toy that is unequivocally yours. OpenClaw, for instance, offered a user experience that felt more like collaborating with a human assistant, but the enthusiasm waned as security experts realized how easily inexperienced users could put themselves at risk by using it.
For years, enterprise IT has contended with shadow IT, accepting that skilled technical teams must manage and fix assets they neither designed nor installed, much as a toddler hands back a broken toy. With autonomous agents, the stakes rise: persistent service-account credentials, long-lived API tokens, and authority over critical file systems. Addressing this requires committing appropriate IT budgets and resources up front to ensure central discovery, monitoring, and remediation for the thousands of agents that employees and departments will generate.
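A sketch of what central discovery might check, assuming a registry of employee-built agents and a 90-day token-rotation policy, both invented for this example:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RegisteredAgent:
    agent_id: str
    owner_employee_id: str
    token_issued: date


MAX_TOKEN_AGE = timedelta(days=90)  # assumed rotation policy


def stale_credentials(registry: list[RegisteredAgent],
                      today: date) -> list[str]:
    # Flag agents whose long-lived API tokens have outlived the
    # rotation window so central IT can remediate them.
    return [a.agent_id for a in registry
            if today - a.token_issued > MAX_TOKEN_AGE]


registry = [
    RegisteredAgent("a-101", "emp-42", date(2025, 9, 1)),
    RegisteredAgent("a-102", "emp-77", date(2026, 1, 5)),
]
print(stale_credentials(registry, today=date(2026, 2, 1)))  # ['a-101']
```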
Establishing a retirement strategy
Recently, an acquaintance shared that she helped a client save hundreds of thousands of dollars by identifying and terminating a “zombie project”: a forgotten or failed AI pilot left running on a GPU cloud instance. A business could have thousands of agents at risk of joining that zombie army. Today, many executives urge employees to leverage AI, or else, and employees are told to build their own AI-first workflows and AI assistants. With the functionality of a tool like OpenClaw and directives coming from the top, it is easy to predict that the number of self-created agents arriving at the office alongside their human counterparts will surge. Because an AI agent is a program, and therefore company-owned intellectual property, agents can be orphaned when their creators move to different departments or organizations. Proactive policies and governance are needed to deactivate and retire any agents tied to specific employee IDs and permissions.
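One way such a policy could look in code, assuming a hypothetical registry that binds each agent to the employee ID that created it: when HR flags a departure, every agent bound to that ID is deactivated rather than left orphaned.

```python
registry = [
    {"agent_id": "a-101", "owner": "emp-42", "active": True},
    {"agent_id": "a-102", "owner": "emp-77", "active": True},
]


def offboard(registry: list[dict], employee_id: str) -> list[str]:
    # Deactivate every agent owned by the departing employee so
    # none keeps running under orphaned credentials.
    retired = []
    for agent in registry:
        if agent["owner"] == employee_id and agent["active"]:
            agent["active"] = False
            retired.append(agent["agent_id"])
    return retired


print(offboard(registry, "emp-42"))  # ['a-101']
```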
Financial optimization is governance from the start
While some executives view autonomous AI as a means to improve operating margins by reducing human headcount, many are discovering that chasing ROI through the replacement of human labor is a misguided approach. Incorporating AI capabilities into the enterprise is not like acquiring a new software tool with predictable per-instance or per-seat costs. A December 2025 IDC survey sponsored by DataRobot found that 96% of organizations deploying generative AI, and 92% implementing agentic AI, reported costs that exceeded or significantly exceeded expectations.
The survey distinguishes between governance and ROI, yet as AI systems proliferate across a large enterprise, financial and liability governance must be embedded in the workflows from the outset. Part of enterprise-level governance is accurately forecasting and staying within designated budgets. Unlike conventional software financial models built on per-seat pricing with associated support and maintenance fees, AI carries consumption and usage costs that scale with the workflow across the enterprise: more users mean more tokens or more compute time, and therefore higher expenses. It’s like leaving a tab open, or leaving an online retailer’s digital shopping cart logged in on a toddler’s gaming tablet.
Cloud FinOps was deterministic, but generative AI, and the agentic AI systems built on it, are inherently probabilistic. Some AI-centric founders are beginning to realize that a single agent’s token expenses can soar to $100,000 per session. Without pre-established guardrails, chaining intricate autonomous agents that run unsupervised for extended periods can easily blow past the budget for hiring a junior developer.
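A hedged sketch of such a guardrail: meter cumulative token spend inside the agent loop and halt the session the moment a dollar cap is crossed, rather than discovering the overrun on the invoice. The rate and cap below are illustrative, not real pricing.

```python
class BudgetGuard:
    """Halts an agent session once cumulative token spend crosses a cap."""

    def __init__(self, max_usd: float, usd_per_1k_tokens: float):
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent_usd = 0.0

    def record(self, tokens: int) -> None:
        # Meter spend as the session runs, not after the fact.
        self.spent_usd += tokens / 1000 * self.rate
        if self.spent_usd > self.max_usd:
            raise RuntimeError(
                f"Session halted: ${self.spent_usd:.2f} exceeds "
                f"${self.max_usd:.2f} cap")


guard = BudgetGuard(max_usd=0.50, usd_per_1k_tokens=0.01)
try:
    for step_tokens in [12_000, 30_000, 45_000]:  # simulated usage per step
        guard.record(step_tokens)
except RuntimeError as stop:
    print(stop)  # Session halted: $0.87 exceeds $0.50 cap
```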
Maintaining human involvement remains essential
The promise of autonomous agentic AI is acceleration: of business operations, product launches, customer interactions, and customer loyalty. Moving these critical functions to machine-speed decision-making without human oversight fundamentally alters the governance landscape. Many of the principles around proactive permissions, discovery, audits, remediation, and financial operations and optimization remain the same, but their execution must adapt to keep pace with autonomous agentic AI.
This content was produced by Intel. It was not written by MIT Technology Review’s editorial team.