NVIDIA has updated its Spectrum-X Ethernet platform to integrate the Open MRC protocol, a move aimed at improving how hyperscale data centers handle AI workloads. The company said the change helps optimize network performance for large-scale AI training and inference. OpenAI and Microsoft are already using the technology, according to NVIDIA.
Targeting AI traffic at hyperscale
Open MRC, short for Open Multi-Rail Communication, is designed to reduce congestion and improve bandwidth utilization in networks that run distributed AI tasks. By embedding the protocol directly into Spectrum-X, NVIDIA's Ethernet switches and network adapters can coordinate data flows more efficiently, cutting the delays that slow model training.
Hyperscale data centers, where thousands of GPUs work in parallel on a single AI model, often struggle with network bottlenecks. The Open MRC integration addresses that by using multiple network paths simultaneously, spreading traffic evenly and avoiding hot spots. NVIDIA says its own testing showed latency reductions and higher throughput in these environments, though the company hasn't released specific figures.
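To make the idea concrete, here is a minimal Python sketch of least-loaded spreading across parallel rails. The MultiRailScheduler class and its byte-counting heuristic are illustrative assumptions for the general technique, not Open MRC's actual path-selection logic, which NVIDIA has not published.

```python
# Illustrative sketch of multi-rail flow spreading. This is NOT
# NVIDIA's implementation; it assumes a simplified model where each
# flow is assigned to the least-loaded of several parallel rails.

class MultiRailScheduler:
    """Spread flows across parallel network rails to avoid hot spots."""

    def __init__(self, num_rails: int):
        # Track bytes currently in flight on each rail.
        self.load = [0] * num_rails

    def assign(self, flow_bytes: int) -> int:
        # Pick the rail with the least outstanding traffic.
        rail = min(range(len(self.load)), key=lambda r: self.load[r])
        self.load[rail] += flow_bytes
        return rail

    def complete(self, rail: int, flow_bytes: int) -> None:
        # Release capacity once a flow finishes.
        self.load[rail] -= flow_bytes


scheduler = MultiRailScheduler(num_rails=4)
for size_kb in [800, 200, 600, 400, 300]:
    rail = scheduler.assign(size_kb)
    print(f"flow of {size_kb} KB -> rail {rail}")
```

The contrast is with conventional per-flow hashing (as in ECMP), which can pin a large flow to a single congested link for its entire lifetime; spreading traffic across rails keeps no one link hot.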
Microsoft and OpenAI among early adopters
Both Microsoft and OpenAI have deployed Spectrum-X with Open MRC in their data centers. For Microsoft, that means Azure's AI infrastructure gets the protocol's efficiency gains. OpenAI, which runs its large language models on massive clusters, benefits from faster interconnects between GPUs.
The adoption by two of the biggest names in AI gives NVIDIA a strong reference case as it positions Spectrum-X against Ethernet offerings from competitors such as Broadcom and Intel. NVIDIA has long dominated the AI accelerator market with its GPUs, but the networking layer is becoming just as critical as the compute hardware.
Why the protocol matters now
AI models are getting larger, and training them across thousands of accelerators demands a network that can keep up. Traditional Ethernet protocols weren't built for the all-to-all communication patterns common in deep learning. Open MRC adapts Ethernet to handle those patterns without requiring custom hardware or proprietary standards.
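To see why those patterns are hard on a network, consider the scale of an all-to-all exchange: each of N workers sends a distinct chunk to every other worker, so a single step puts N × (N − 1) flows on the wire at once. The short Python sketch below is purely illustrative; the all_to_all_flows helper is a hypothetical name, not part of any NVIDIA API.

```python
# Illustrative only: count the simultaneous flows an all-to-all
# exchange generates among N workers, a communication pattern common
# in expert-parallel and tensor-parallel training.

def all_to_all_flows(num_workers: int) -> list[tuple[int, int]]:
    """Return every (sender, receiver) pair in one all-to-all step."""
    return [(src, dst)
            for src in range(num_workers)
            for dst in range(num_workers)
            if src != dst]

for n in (8, 64, 1024):
    flows = all_to_all_flows(n)
    print(f"{n} workers -> {len(flows)} concurrent flows")
# 1,024 workers already implies over a million concurrent flows.
```

With single-path forwarding, hash collisions can concentrate many of those flows onto a few links; multi-rail spreading is aimed at exactly that failure mode.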
NVIDIA's approach keeps Spectrum-X compatible with standard Ethernet infrastructure, so companies don't need to rip out their existing switches. That lowers the barrier for data center operators looking to improve AI performance without a full network overhaul.
OpenAI and Microsoft aren't the only ones watching. Other hyperscale operators, including cloud providers running their own AI services, are likely evaluating the technology. The protocol is open, meaning any vendor can implement it, but NVIDIA's tight integration with its own hardware and software stack gives it a performance edge, at least for now.
The next step for NVIDIA will be expanding adoption beyond the early customers. The company is expected to showcase further benchmarks and partner announcements at upcoming industry events. For OpenAI and Microsoft, the protocol is already in production, handling real AI workloads.