
OpenAI Debuts GPT-5.3-Codex-Spark on Cerebras Hardware

OpenAI has launched a new artificial intelligence model, GPT-5.3-Codex-Spark, designed to help developers write and debug code. The model runs on hardware from Cerebras Systems, the chip company known for its wafer-scale processors. It's the first public release from OpenAI to use Cerebras chips.

A Code-Focused Model

GPT-5.3-Codex-Spark is the latest in OpenAI's line of code-generation models, building on the earlier Codex series. It is trained to understand and generate source code across multiple programming languages. The "Spark" suffix suggests a focus on speed, and the version number points to incremental improvements over earlier GPT-5 derivatives. OpenAI has not released benchmark scores or other performance data.

Why Cerebras Hardware Matters

Cerebras builds its CS-2 system around a single wafer-scale chip packed with hundreds of thousands of processing cores. Running GPT-5.3-Codex-Spark on this hardware could mean lower latency and higher throughput for code generation than traditional GPU clusters deliver. Cerebras has worked with other AI labs to accelerate large language models before, but this is a notable partnership for both companies. Whether the model runs on Cerebras's cloud service or on on-premise systems hasn't been disclosed.

The Hardware Landscape

The deal highlights the competition among chipmakers to power AI workloads. Nvidia dominates the market with its GPUs, but Cerebras, along with companies like Graphcore and AMD, is pushing alternatives. For OpenAI, using Cerebras hardware could reduce reliance on Nvidia and potentially lower costs. How much of the model's performance relies on Cerebras's unique architecture isn't clear yet.

GPT-5.3-Codex-Spark is available through OpenAI's API. Pricing follows existing API tiers, with no separate rate limits announced.
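Since the model is exposed through OpenAI's existing API, calling it should look like any other chat-style request. The sketch below only assembles a request payload so it runs without credentials; the model identifier "gpt-5.3-codex-spark" is an assumption based on the product name, not a confirmed API string.

```python
# Hypothetical sketch: the model id "gpt-5.3-codex-spark" is assumed from the
# product name and has not been confirmed as the actual API identifier.
# Only payload construction is shown, so this runs without an API key.

def build_codex_request(prompt: str, model: str = "gpt-5.3-codex-spark") -> dict:
    """Assemble a Chat Completions-style payload for a code-generation prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_codex_request("Write a Python function that reverses a string.")
print(payload["model"])  # -> gpt-5.3-codex-spark

# To actually send the request with the official SDK (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**payload)
# print(resp.choices[0].message.content)
```

Because pricing follows existing API tiers, no new client configuration should be needed beyond swapping in the model name.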

No outside evaluations of the model have been published so far. Developers who start using it will likely share their own results in the coming weeks.