In collaboration with SAP
Artificial intelligence is moving quickly through enterprises, shifting from experimentation to routine use. Companies are deploying copilots, agents, and predictive tools across finance, supply chain, human resources, and customer service. According to a recent survey, half of organizations are expected to use AI in at least three business areas by the end of 2025.

But as AI moves into core workflows, business leaders are finding that the hard part is not model performance or compute capacity. It is the quality and context of the data those systems depend on. AI imposes a new requirement: systems must not only ingest data, they must understand the business context around it.
Without that context, AI can produce answers quickly and still reach the wrong decisions, says Irfan Khan, president and chief product officer of SAP Data & Analytics.
“AI excels at delivering results,” he says. “It operates quickly, but without context it cannot exercise sound judgment, and sound judgment is what delivers return on investment for the enterprise. Speed without judgment doesn’t help. It can even be harmful.”
In the coming era of autonomous systems and intelligent applications, that context layer becomes critical. To supply it, organizations need a well-structured data fabric that goes beyond data integration, Khan explains. The right data fabric lets companies scale AI securely, coordinate decisions across systems and agents, and ensure that automation follows real business priorities rather than acting on its own.
In light of this, numerous businesses are re-evaluating their data architecture. Rather than merely consolidating data into one repository, they are exploring methods to link information across applications, clouds, and operational systems while maintaining the semantics that define how the business operates. This transformation is fueling heightened interest in data fabric as a cornerstone for AI infrastructure.
Losing context is a significant AI challenge
Conventional data strategies have focused on aggregation. Over the past two decades, companies have invested heavily in extracting data from operational systems and loading it into centralized warehouses, lakes, and dashboards. That approach makes reporting, performance monitoring, and insight generation easier across the organization, but it strips away much of the meaning attached to the data, including how it relates to policies, processes, and real-world decisions.
Consider two firms using AI to manage a supply chain disruption. One works from raw indicators such as inventory levels, lead times, and supplier ratings; the other adds context from business processes, policies, and metadata. Both systems will analyze the information quickly, but they will likely reach different conclusions.
Knowing which customers are strategic accounts, which tradeoffs are acceptable during a shortage, and what is happening deeper in the supply chain lets one system make an informed strategic choice, while the other lacks the context to do so, Khan notes.
“Both systems operate swiftly, but only one moves in the correct direction,” he remarks. “This is the context premium and the benefit you obtain when your data foundation maintains context across processes, policies, and data by design.”
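The context premium can be made concrete with a small, purely illustrative sketch. Here two agents rank the same backorders: one sees only the raw order fields, the other also sees business context (strategic-account status, penalty clauses) that the operational records themselves do not carry. The customers, fields, and scoring rules are invented for illustration, not drawn from any real system.

```python
# Hypothetical illustration of the "context premium": two agents rank
# the same backorders, one from raw fields only, one with business
# context attached. All names and values are made up.
backorders = [
    {"customer": "Acme", "qty": 500, "days_late": 2},
    {"customer": "Borel", "qty": 120, "days_late": 6},
]
# Context the raw records do not carry: which accounts are strategic
# and which contracts carry penalty clauses.
context = {
    "Acme": {"strategic": False, "penalty_clause": False},
    "Borel": {"strategic": True, "penalty_clause": True},
}

def rank_raw(orders):
    # Context-free agent: biggest, latest order first.
    return sorted(orders, key=lambda o: (o["qty"], o["days_late"]), reverse=True)

def rank_with_context(orders, ctx):
    # Context-aware agent: strategic accounts and penalty exposure
    # outrank raw order size.
    def score(order):
        c = ctx[order["customer"]]
        return (c["strategic"], c["penalty_clause"],
                order["days_late"], order["qty"])
    return sorted(orders, key=score, reverse=True)

print(rank_raw(backorders)[0]["customer"])                    # Acme
print(rank_with_context(backorders, context)[0]["customer"])  # Borel
```

Both agents are fast; only the second expedites the order whose delay actually threatens a strategic relationship and a contractual penalty.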
Historically, businesses got by with missing context because human experts supplied it. With AI, the gap becomes a hard constraint. AI systems don't just present information; they act on it. If a system cannot explain why data matters, an AI model may optimize for the wrong outcome. Inventory figures, payment histories, or demand signals may be accurate, but they do not by themselves say which customers to prioritize, which contractual obligations apply, or which products are critical. The result can be output that is technically correct but operationally wrong.
This recognition is changing how companies think about AI readiness. Most acknowledge that they lack the data processes and infrastructure needed to trust their data and AI systems: only one in five organizations rates its approach to data as highly mature, and just 9% feel fully equipped to integrate and interoperate across their data frameworks.
Don’t consolidate, integrate
The emerging remedy is a data fabric: an abstraction layer spanning infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, letting agents work with business knowledge rather than raw storage systems. Knowledge graphs play a pivotal role, enabling agents to query enterprise data using natural language and business logic.
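To show what "querying business knowledge rather than raw storage" can mean, here is a minimal sketch of a knowledge graph as subject-predicate-object triples, with a business-level question answered by walking relations. The entities, predicates, and the `orders_at_risk` helper are invented for illustration and do not represent any specific SAP schema or product.

```python
# Minimal knowledge-graph sketch: facts as (subject, predicate, object)
# triples. Names and relations are illustrative only.
triples = [
    ("order-1042", "placed_by", "Acme"),
    ("Acme", "is_a", "strategic_account"),
    ("order-1042", "contains", "part-77"),
    ("part-77", "supplied_by", "NordSupply"),
    ("NordSupply", "status", "disrupted"),
]

def objects(subject, predicate):
    """All objects linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def orders_at_risk():
    # A business-level question ("which orders depend on a disrupted
    # supplier?") answered by traversing relations, not by scanning
    # raw tables.
    risky = []
    for s, p, o in triples:
        if p == "contains":
            for supplier in objects(o, "supplied_by"):
                if "disrupted" in objects(supplier, "status"):
                    risky.append(s)
    return risky

print(orders_at_risk())  # ['order-1042']
```

In a production fabric, this role is typically played by a graph store queried through SPARQL or a natural-language interface; the point here is only that the agent's question maps onto relationships, not storage layout.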
The value of the data fabric rests on three components: intelligent computing for speed, a knowledge layer for business insight and context, and agents that act autonomously on that understanding. The strength lies in how these capabilities work together, Khan says.
The technology provides the architecture: a foundation for agent-to-agent communication and coordination. Process determines how business and IT divide ownership, establish governance, and build the culture of trust that drives adoption. All three elements must work together for a business data fabric to succeed.
“It enables confident, consistent decision-making, and when these elements coalesce, AI does not merely analyze and interpret data — it fosters smarter, quicker decisions that genuinely impact the business,” he asserts. “This is the promise of a carefully crafted business data fabric, where each component reinforces the others, and every insight is rooted in trust and transparency.”
Technically, building a data-fabric layer requires several capabilities. Data must be accessible across environments through federation rather than forced consolidation. A semantic or knowledge layer must harmonize meaning across systems, often backed by knowledge graphs and metadata-driven catalogs. And governance and policy enforcement must operate across the fabric so that AI systems can access data securely and consistently.
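The three capabilities above can be sketched together in a few lines. In this hypothetical example, a semantic layer maps one business term onto per-system fields, a policy check runs before any data is returned, and the source systems stay in place (federation) rather than being copied into one store. System names, tables, and policies are all invented for illustration.

```python
# Hedged sketch of a fabric access path. All schemas and values
# are illustrative, not a real product API.
SEMANTIC_LAYER = {
    "customer_revenue": {  # one business term, mapped per system
        "erp": {"table": "FIN_REV", "field": "net_rev"},
        "crm": {"table": "accounts", "field": "annual_value"},
    }
}
POLICIES = {"customer_revenue": {"allowed_roles": {"finance", "planning"}}}

SYSTEMS = {  # stand-ins for federated sources left in place
    "erp": {"FIN_REV": {"Acme": {"net_rev": 1_200_000}}},
    "crm": {"accounts": {"Acme": {"annual_value": 1_150_000}}},
}

def fetch(term, key, role):
    # Governance first: policy enforcement happens at the fabric,
    # not inside each source system.
    if role not in POLICIES[term]["allowed_roles"]:
        raise PermissionError(f"role {role!r} may not read {term!r}")
    results = {}
    for system, binding in SEMANTIC_LAYER[term].items():
        row = SYSTEMS[system][binding["table"]][key]
        results[system] = row[binding["field"]]
    return results  # data stays where it lives; only answers move

print(fetch("customer_revenue", "Acme", "finance"))
```

An unauthorized role (say, `"sales"`) is refused before any source is touched, which is the property a fabric-level governance layer is meant to provide.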
Collectively, these components create a foundation wherein AI engages with business knowledge instead of raw storage systems — a crucial step in transitioning from experimentation to authentic enterprise automation.
Beyond data isolation and dashboards
In the emerging age of agentic AI, the work of monitoring, analyzing, and deciding on data increasingly shifts to software. AI agents can watch events, trigger workflows, and make real-time decisions, often without direct human involvement. That speed opens new possibilities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer service, they must share an understanding of business priorities.
In the absence of a common knowledge layer linking disparate data, coordination among systems can quickly falter. One system might optimize for margin, another for liquidity, and yet another for compliance, each drawing from different datasets.
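That divergence is easy to demonstrate. In this invented example, three agents evaluate the same two options against different objectives and disagree; a shared priority weighting, standing in for the common knowledge layer, is what produces one consistent decision. The objectives, scores, and weights are arbitrary illustrations.

```python
# Illustrative only: three agents score the same decision from
# different objectives. Scores and weights are arbitrary.
options = {
    "expedite": {"margin": -0.2, "liquidity": -0.1, "compliance": 1.0},
    "delay":    {"margin":  0.1, "liquidity":  0.3, "compliance": 0.2},
}

def pick(objective):
    # A single-objective agent with no shared priorities.
    return max(options, key=lambda o: options[o][objective])

# Without a common layer, the agents pull in different directions:
print(pick("margin"), pick("liquidity"), pick("compliance"))

# A shared weighting, standing in for the knowledge layer's view of
# business priorities, yields one answer for all agents.
SHARED_PRIORITIES = {"margin": 0.3, "liquidity": 0.2, "compliance": 0.5}

def pick_shared():
    return max(options, key=lambda o: sum(
        SHARED_PRIORITIES[k] * v for k, v in options[o].items()))

print(pick_shared())
```

The first line shows the margin and liquidity agents choosing to delay while the compliance agent expedites; under the shared weighting, all agents converge on expediting.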
Crucially, most enterprises already hold much of the knowledge needed to make this work, Khan says. Years of operational data, master data, workflows, and policy logic already live across business applications; companies just need to make them accessible. Organizations that implement data fabrics report greater trust in their data, with more than two-thirds of enterprises citing improved data accessibility, visibility, and control.
“The opportunity isn’t solely in generating context from the ground up; it’s in activating and interlinking the context that already exists throughout your business,” he continues, asserting that a data fabric is the “architecture that guarantees data semantics, business processes, and policies are connected as a unified system across all clouds.”
This content was generated by Insights, the custom content division of MIT Technology Review. It was not authored by the editorial staff of MIT Technology Review. It was developed through research, design, and writing by human authors, editors, analysts, and illustrators. This encompasses the formulation of surveys and data collection for surveys. Any AI tools utilized were confined to secondary production processes that underwent thorough human review.