
In collaboration with HPE
The Ryder Cup, nearly 100 years old, showcases an intense golf competition between Europe and the United States. In 2025, roughly 250,000 attendees were present for three days of match play on the greens.

From the standpoint of technology and logistics, executing an event of this magnitude is quite challenging. The infrastructure for the Ryder Cup has to support the myriad network users converging at the site (this year at Bethpage Black in Farmingdale, New York) daily.
To tackle this IT challenge, the Ryder Cup partnered with technology leader HPE to devise a central operations hub. The focal point was a platform providing tournament personnel with data visualization to assist operational decisions. This dashboard utilized a high-performance network and private-cloud environment to consolidate and refine insights from various real-time data streams.
This represented a view of what large-scale AI-ready networking entails—a practical stress test with consequences for everything from event oversight to corporate operations. While models and data readiness receive much attention in boardrooms and media, networking serves as a pivotal third component for effective AI deployment, says Jon Green, CTO of HPE Networking. “Disconnected AI yields limited benefits; a mechanism is required to input and output data for training and inference,” he asserts.
As enterprises progress towards decentralized, real-time AI applications, the networks of the future will need to manage even larger amounts of data at unprecedented speeds. The occurrences at Bethpage Black provide a lesson applicable across various sectors: Networks prepared for inference are crucial in transforming AI’s potential into tangible results.
Preparing a network for AI inference
Over half of organizations are still finding it difficult to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they can execute real-time data exchanges for innovation. This marks a significant improvement over last year’s results (only 7% indicated such capabilities in 2024), but there remains work to do in connecting data gathering to immediate decision-making.
The network could be key to closing that gap. The solution will likely hinge on infrastructure design. While conventional enterprise networks are built to handle predictable streams of business applications—like email, web browsing, and file sharing—they are not equipped for the dynamic, high-speed data movement required by AI tasks. Inference particularly relies on fast data transfer between numerous GPUs with supercomputer-level precision.
“A standard, off-the-shelf enterprise network allows some flexibility,” notes Green. “Most won’t realize if an email service is slightly delayed. However, with AI transaction processing, the entire procedure hinges on the last calculation. Thus, any latency or congestion becomes very apparent.”
Networks designed for AI must therefore operate under a different set of performance criteria, including ultra-low latency, lossless throughput, specialized hardware, and scaling adaptability. One significant difference is AI’s distributed nature: a single workload is split across many GPUs that must exchange data in lockstep, so the network must move that traffic without stalls.
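To see why “the entire procedure hinges on the last calculation,” as Green puts it, consider a synchronized collective operation across many GPU links: every participant waits for the slowest transfer, so step time is governed by worst-case latency, not the average. The sketch below is a minimal illustration of that tail-latency effect, not a model of any specific HPE system; the link counts and latency figures are invented for the example.

```python
import random

def collective_step_time(link_latencies_ms):
    """In a synchronized collective (e.g., an all-reduce), every GPU
    waits for the slowest link, so the step takes the max latency,
    not the mean."""
    return max(link_latencies_ms)

random.seed(42)
# 64 links, mostly fast (~1 ms), with rare congestion spikes on a few
latencies = [
    1.0 + (random.random() * 20 if random.random() < 0.02 else random.random() * 0.2)
    for _ in range(64)
]

mean_ms = sum(latencies) / len(latencies)
step_ms = collective_step_time(latencies)
print(f"mean link latency: {mean_ms:.2f} ms, step time: {step_ms:.2f} ms")
```

Even a 2% chance of congestion on any one link drags the whole step toward the spike latency, which is why lossless, low-jitter fabrics matter far more for AI traffic than for email or file sharing.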
The Ryder Cup vividly exemplified this new class of networking. During the event, a Connected Intelligence Center was established to gather data from ticket scans, weather conditions, GPS-monitored golf carts, merchandise sales, queue lengths, and network functionality. Meanwhile, 67 AI-equipped cameras were strategically placed around the course. These inputs were processed through an operational intelligence dashboard, granting staff an immediate overview of activities across the venue.
“The tournament presents a significant networking challenge, given the expansive open spaces with uneven crowd distribution,” explains Green. “Crowds tend to congregate around the action, leading to high-density areas filled with spectators and devices, while other zones remain completely vacant.”
To manage this variability, engineers established a two-tiered architecture. Across the vast venue, over 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors collaborated to ensure uninterrupted connectivity and support a private cloud AI cluster for real-time analytics. The front-end system connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—based at a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency setup, effectively acting as the system’s brain. This configuration facilitated prompt responses on-site along with data gathering that could inform future operational strategies. “AI models were also accessible to the team, capable of analyzing video footage of shots and identifying the most captivating ones,” states Green.
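The dashboard described above consolidates heterogeneous feeds (ticket scans, cart GPS, queue sensors) into a per-zone view that staff can act on. Here is a minimal sketch of that consolidation step under assumed, simplified data shapes; the event names and fields are illustrative, not the Connected Intelligence Center’s actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g., "ticket_scan", "cart_gps", "queue_sensor" (hypothetical names)
    zone: str     # venue zone the reading came from
    value: float  # source-specific measurement

def consolidate(events):
    """Roll heterogeneous feeds up into a per-zone snapshot
    that a dashboard can render directly."""
    snapshot = defaultdict(lambda: defaultdict(float))
    for e in events:
        snapshot[e.zone][e.source] += e.value
    return {zone: dict(sources) for zone, sources in snapshot.items()}

feed = [
    Event("ticket_scan", "gate_a", 1),
    Event("ticket_scan", "gate_a", 1),
    Event("queue_sensor", "merch_tent", 42),
]
print(consolidate(feed))
# {'gate_a': {'ticket_scan': 2.0}, 'merch_tent': {'queue_sensor': 42.0}}
```

The real system adds streaming transport, time windows, and AI-driven video analysis on top, but the core idea is the same: many raw feeds reduced to one operational picture.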
Physical AI and the resurgence of on-premises intelligence
In event management, timing is crucial, and even more so where safety is paramount—such as a self-driving vehicle making an instant decision to speed up or reduce speed.
As industries prepare for the emergence of physical AI, where applications transition from screens to factory settings and urban environments, a growing number of companies are reevaluating their architectures. Instead of relegating data to centralized cloud systems for inference, some are implementing edge-based AI clusters that process information closer to its origin. Data-intensive training may still occur in the cloud, but inference takes place on-site.
This hybrid model is initiating a wave of operational repatriation, as tasks once confined to the cloud return to on-premises systems to enhance speed, security, autonomy, and cost-effectiveness. “We’ve witnessed a migration of IT into the cloud in recent years, but physical AI is one application that we believe will encourage a substantial return to on-prem,” projects Green, citing the example of an AI-enhanced factory setting, where round-trips of sensor data to the cloud would be too sluggish to safely operate automated machines. “By the time the cloud processes the data, the machinery has already moved,” he elaborates.
There is evidence to support Green’s forecast: research by the Enterprise Research Group indicates that 84% of respondents are reassessing their application deployment strategies due to AI’s rise. Market analyses also reflect this transition. IDC predicts the AI infrastructure market will reach $758 billion by 2029.
AI in networking and the future of autonomous infrastructure
The interplay between networking and AI is reciprocal: Contemporary networks enable AI at scale, while AI is concurrently enhancing network intelligence and capabilities.
“Networks are among the most data-rich structures within any organization,” asserts Green. “This positions them as an ideal case for AI. We can evaluate millions of configuration scenarios across various customer ecosystems and discover what genuinely enhances performance and stability.”
HPE, for instance, boasts one of the most extensive network telemetry databases globally, where AI models scrutinize anonymized data sourced from billions of interconnected devices to detect patterns and improve performance over time. The platform processes over a trillion telemetry points daily, permitting it to continually learn from actual conditions.
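Learning from a trillion telemetry points a day starts with spotting readings that break from recent behavior. The snippet below sketches one common baseline technique, a rolling z-score check; it is a toy stand-in for illustration, not HPE’s actual models, and the window and threshold values are arbitrary.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=20, threshold=3.0):
    """Flag a telemetry point that deviates more than `threshold`
    standard deviations from the recent window of readings."""
    history = deque(maxlen=window)
    def check(value):
        anomalous = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) > threshold * sigma
        history.append(value)
        return anomalous
    return check

check = make_anomaly_detector()
# steady readings around 10, then a sudden spike to 55
readings = [10.0 + 0.1 * (i % 5) for i in range(30)] + [55.0]
flags = [check(r) for r in readings]
print(flags[-1])  # the 55.0 spike is flagged: True
```

At fleet scale, the same principle (compare each device’s behavior against a learned baseline) is what lets patterns surface across billions of connected devices.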
The concept widely known as AIOps (AI-driven IT operations) is revolutionizing how enterprise networks are managed across various sectors. Presently, AI provides insights as recommendations that administrators can apply with a simple click. In the future, these systems may autonomously test and implement low-risk adjustments.
That long-term vision, Green notes, is termed a “self-driving network”—one capable of handling the repetitive, error-prone tasks that have historically burdened IT teams. “AI isn’t poised to replace the network engineer; rather, it will remove the tedious tasks that hinder their productivity,” he explains. “You’ll be able to say, ‘Please configure 130 switches to resolve this issue,’ and the system will manage it. If a port malfunctions or a connector is incorrectly plugged, AI can identify the problem—and in many instances, correct it autonomously.”
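Green’s “configure 130 switches” scenario boils down to a fan-out with an approval gate: today a human clicks to approve; tomorrow the system might auto-apply low-risk changes. The sketch below illustrates that pattern only; the function names and change format are hypothetical, not a real network-automation API.

```python
def plan_fix(switches, change):
    """Build a rollout plan: the AI proposes one change,
    to be applied identically across every switch."""
    return [{"switch": s, "change": change, "status": "pending"} for s in switches]

def apply_plan(plan, approved):
    """Human approval gates the rollout today; a future system
    might auto-approve changes it judges low-risk."""
    if not approved:
        return plan
    for step in plan:
        step["status"] = "applied"
    return plan

switches = [f"sw-{i:03d}" for i in range(1, 131)]  # the 130 switches from Green's example
plan = apply_plan(plan_fix(switches, {"port_fix": "reset-transceiver"}), approved=True)
print(sum(1 for s in plan if s["status"] == "applied"))  # 130
```

The design point is that the operator states intent once and reviews one plan, rather than touching 130 devices by hand, which is exactly the tedium Green expects AI to absorb.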
Modern digital strategies increasingly depend on the efficacy of information flow. Whether managing a live event or optimizing a supply chain, network performance now defines business outcomes more than ever. Establishing that foundation today will distinguish the organizations that merely pilot AI from those that scale it.
For further insights, register to view MIT Technology Review’s EmTech AI Salon, featuring HPE.
This material was produced by Insights, the bespoke content division of MIT Technology Review. It was not authored by the editorial team of MIT Technology Review. The content was researched, designed, and composed by human writers, editors, analysts, and illustrators, including the creation of surveys and data gathering. Any AI tools utilized were restricted to secondary production tasks that underwent rigorous human evaluation.