Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia’s dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.
The company will add six new AI data centers across North America and Europe, increasing its inference capacity twentyfold to over 40 million tokens per second. The expansion includes facilities in Dallas, Minneapolis, Oklahoma City, Montreal, New York, and France, with 85% of the total capacity located in the United States.
“This year, our goal is to truly satisfy all the demand and all the new demand we expect will come online as a result of new models like Llama 4 and new DeepSeek models,” said James Wang, Director of Product Marketing at Cerebras, in an interview with VentureBeat. “This is our huge growth initiative this year to satisfy almost unlimited demand we’re seeing across the board for inference tokens.”
The data center expansion represents the company’s ambitious bet that the market for high-speed AI inference, the process by which trained AI models generate outputs for real-world applications, will grow dramatically as companies seek faster alternatives to GPU-based solutions from Nvidia.

Strategic partnerships that bring high-speed AI to developers and financial analysts
Alongside the infrastructure expansion, Cerebras announced partnerships with Hugging Face, the popular AI developer platform, and AlphaSense, a market intelligence platform widely used in the financial services industry.
The Hugging Face integration will allow its five million developers to access Cerebras Inference with a single click, without having to sign up for Cerebras separately. This represents a significant distribution channel for Cerebras, particularly for developers working with open-source models like Llama 3.3 70B.
“Hugging Face is kind of the GitHub of AI and the center of all open-source AI development,” Wang explained. “The integration is super nice and native. You just appear in their inference providers list. You just check the box and then you can use Cerebras right away.”
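In practice, the integration reduces provider selection to a single parameter in the Hugging Face client. The snippet below is a minimal sketch rather than official documentation: it assumes the huggingface_hub Python library with inference providers enabled, a valid Hugging Face token, and uses Llama 3.3 70B purely as an illustrative model.

```python
# Minimal sketch (assumptions noted above): calling Llama 3.3 70B on
# Cerebras hardware through Hugging Face's inference-provider integration.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="cerebras",  # route requests to Cerebras instead of the default provider
    api_key="hf_...",     # placeholder: your Hugging Face access token
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Explain wafer-scale inference in one sentence."}],
)

print(completion.choices[0].message.content)
```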
The AlphaSense partnership represents a significant enterprise customer win, with the financial intelligence platform switching from what Wang described as a “global, top-three closed-source AI model vendor” to Cerebras. The company, which serves roughly 85% of Fortune 100 companies, is using Cerebras to accelerate its AI-powered search capabilities for market intelligence.
“This is a tremendous customer win and a very large contract for us,” Wang said. “We speed them up by 10x, so what used to take five seconds or longer basically becomes instant on Cerebras.”

How Cerebras is winning the race for AI inference speed as reasoning models slow down
Cerebras has been positioning itself as a specialist in high-speed inference, claiming its Wafer-Scale Engine (WSE-3) processor can run AI models 10 to 70 times faster than GPU-based solutions. This speed advantage has become increasingly valuable as AI models evolve toward more complex reasoning capabilities.
“If you listen to Jensen’s remarks, reasoning is the next big thing, even according to Nvidia,” Wang said, referring to Nvidia CEO Jensen Huang. “But what he’s not telling you is that reasoning makes the whole thing run 10 times slower because the model has to think and generate a bunch of internal monologue before it gives you the final answer.”
This slowdown creates an opportunity for Cerebras, whose specialized hardware is designed to accelerate these more complex AI workloads. The company has already secured high-profile customers including Perplexity AI and Mistral AI, which use Cerebras to power their AI search and assistant products, respectively.
“We help Perplexity become the world’s fastest AI search engine. This just isn’t possible otherwise,” Wang said. “We help Mistral achieve the same feat. Now they have a reason for people to subscribe to Le Chat Pro, whereas before, your model is probably not at the same cutting-edge level as GPT-4.”

The compelling economics behind Cerebras’ challenge to OpenAI and Nvidia
Cerebras is betting that the combination of speed and cost will make its inference services attractive even to companies already using leading models like GPT-4.
Wang pointed out that Meta’s Llama 3.3 70B, an open-source model that Cerebras has optimized for its hardware, now scores the same on intelligence tests as OpenAI’s GPT-4, while costing significantly less to run.
“Anyone who’s using GPT-4 today can just move to Llama 3.3 70B as a drop-in replacement,” he explained. “The price for GPT-4 is [about] $4.40 in blended terms. And Llama 3.3 is like 60 cents. We’re about 60 cents, right? So you reduce cost by almost an order of magnitude. And if you use Cerebras, you increase speed by another order of magnitude.”
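Wang’s arithmetic is straightforward to sanity-check. “Blended” pricing here is presumably a weighted average of input and output token rates under a typical traffic mix; the 3:1 input-to-output ratio and the per-token rates in the sketch below are illustrative assumptions, not figures provided by Cerebras.

```python
# Rough sanity check of the quoted pricing, under stated assumptions.
# "Blended" is taken to mean a weighted average of input and output
# per-million-token rates at an assumed 3:1 input-to-output traffic mix.
def blended(input_rate: float, output_rate: float, input_share: float = 0.75) -> float:
    """Weighted-average price per million tokens."""
    return input_share * input_rate + (1 - input_share) * output_rate

# Illustrative GPT-4-class rates of $2.50 in / $10.00 out per million tokens
# land close to the $4.40 blended figure Wang quotes.
print(round(blended(2.50, 10.00), 2))  # 4.38
print(round(4.40 / 0.60, 1))           # 7.3x cheaper, "almost an order of magnitude"
```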

Inside Cerebras’ tornado-proof data centers built for AI resilience
The company is making substantial investments in resilient infrastructure as part of its expansion. Its Oklahoma City facility, scheduled to come online in June 2025, is designed to withstand extreme weather events.
“Oklahoma, as you know, is kind of a tornado zone. So this data center is actually rated and designed to be fully resistant to tornadoes and seismic activity,” Wang said. “It will withstand the strongest tornado ever recorded. If that thing just goes through, this thing will just keep sending Llama tokens to developers.”
The Oklahoma City facility, operated in partnership with Scale Datacenter, will house over 300 Cerebras CS-3 systems and features triple-redundant power stations and custom water-cooling solutions specifically designed for Cerebras’ wafer-scale systems.

From skepticism to market leadership: How Cerebras is proving its worth
The expansion and partnerships announced today represent a significant milestone for Cerebras, which has been working to prove itself in an AI hardware market dominated by Nvidia.
“I think what was reasonable skepticism about customer uptake, maybe when we first launched, I think that’s now fully put to bed, just given the diversity of logos we have,” Wang said.
The company is targeting three specific areas where fast inference provides the most value: real-time voice and video processing, reasoning models, and coding applications.
“Coding is one of these kind of in-between reasoning and regular Q&A that takes maybe 30 seconds to a minute to generate all the code,” Wang explained. “Speed is directly proportional to developer productivity. So having speed there matters.”
By focusing on high-speed inference rather than competing across all AI workloads, Cerebras has found a niche where it can claim leadership over even the largest cloud providers.
“Nobody generally competes against AWS and Azure on their scale. We obviously don’t reach full scale like them, but to be able to replicate a key segment… on the high-speed inference front, we will have more capacity than them,” Wang said.

Why Cerebras’ US-centric expansion matters for AI sovereignty and future workloads
The expansion comes at a time when the AI industry is increasingly focused on inference capabilities, as companies move from experimenting with generative AI to deploying it in production applications where speed and cost-efficiency are critical.
With 85% of its inference capacity located in the United States, Cerebras is also positioning itself as a key player in advancing domestic AI infrastructure at a time when technological sovereignty has become a national priority.
“Cerebras is turbocharging the future of U.S. AI leadership with unmatched performance, scale and efficiency – these new global datacenters will serve as the backbone for the next wave of AI innovation,” said Dhiraj Mallick, COO of Cerebras Systems, in the company’s announcement.
As reasoning models like DeepSeek R1 and OpenAI’s o3 become more prevalent, the demand for faster inference solutions is likely to grow. These models, which can take minutes to generate answers on traditional hardware, operate near-instantaneously on Cerebras systems, according to the company.
For technical decision-makers evaluating AI infrastructure options, Cerebras’ expansion represents a significant new alternative to GPU-based solutions, particularly for applications where response time is critical to user experience.
Whether the company can truly challenge Nvidia’s dominance in the broader AI hardware market remains to be seen, but its focus on high-speed inference and substantial infrastructure investment demonstrates a clear strategy to carve out a valuable segment of the rapidly evolving AI landscape.