Coinbase went dark for a stretch of trading hours this week after a cooling failure in an Amazon Web Services data center knocked services offline. The outage blocked account access, delayed balance displays, and left users unable to trade during a volatile session. CEO Brian Armstrong didn't mince words, calling the disruption 'not acceptable' and vowing to re-examine the company's infrastructure choices.
What went wrong in the data center
The root cause was a cooling system failure at an AWS facility Coinbase relies on. Without proper temperature control, the hardware shut down, taking the exchange's trading engine, account portal, and balance updates with it. The company didn't say exactly how long the outage lasted or how many users were affected, but the impact was broad enough to spark frustration on social media and among institutional clients who couldn't execute trades.
Coinbase operates on a mix of cloud and co-located infrastructure. The AWS failure exposed a single point of failure: when one data center's cooling goes, so does access to the platform. The company has used AWS for years as part of a strategy that prioritizes speed and global reach, but this incident laid bare the trade-offs of depending on a single cloud provider's data center for critical operations.
Armstrong's response and the internal review
In a post on X (formerly Twitter), Armstrong said the outage was 'not acceptable' and that the team is 'reviewing infrastructure tradeoffs' to prevent a repeat. He didn't promise a specific fix or timeline, but the message signaled a shift in tone for a company that has often emphasized uptime and reliability in its marketing.
The CEO's commitment to reassess the trade-offs among speed, co-location strategy, and recovery protocols suggests Coinbase may add redundancy across multiple data centers or rethink its reliance on a single cloud region. The company has faced outages before — a pattern that has frustrated customers who expect a bank-like level of availability from a major exchange.
What's on the table for change
Coinbase is now weighing several options. One is to spread its infrastructure across more AWS availability zones or even multiple cloud providers, which would reduce the blast radius of a single cooling failure. Another is to invest in on-premises or co-located backup systems that can take over instantly when the cloud falters. The company also plans to revisit its recovery protocols — the steps it takes to bring services back online when something breaks.
Those changes won't be cheap or fast. Adding redundancy means more hardware, more complex networking, and more testing. But Armstrong's public acknowledgment of the problem suggests he's willing to spend what it takes to avoid another 'unacceptable' headline.
The question now is whether Coinbase will actually follow through. The company has said similar things after past outages, and users will be watching for concrete changes — not just promises. A formal postmortem and a list of planned upgrades are expected in the coming weeks.