Binance, the world’s largest crypto exchange by volume, said its artificial-intelligence defenses stopped $10.5 billion in attempted fraud over a 15-month period. The exchange deployed more than 100 AI models to counter a rising tide of scams that themselves use AI tools — from deepfake impersonations to automated phishing campaigns.
Why the number matters
That $10.5 billion figure covers roughly the period from early 2025 through April 2026. It’s a big number — bigger than the market cap of most altcoins — but it’s also self-reported. Binance didn’t break down how much of that blocked sum was attempted theft from user accounts versus fraudulent deposits or wash trading. Still, the scale suggests the problem isn’t small. A single exchange blocking eleven figures’ worth of fraud in just over a year says something about how aggressive scammers have gotten.
The AI-vs-AI arms race
Binance says it’s using more than 100 separate AI models to spot suspicious behavior. Some watch for unusual login patterns; others analyze transaction flows in real time. The twist is that the fraud attempts themselves are increasingly AI-powered. Scammers generate fake support calls using voice clones, create synthetic identity documents, and automate social-engineering messages at scale. Binance’s approach is essentially fighting fire with fire — machine learning trained to catch machine-learning-generated attacks.
The exchange hasn’t detailed specific model architectures or false-positive rates. But the sheer volume of models suggests a layered defense: one model might flag a transaction, another validates the flag, a third checks for false alarms. For a platform handling millions of daily transactions, that’s a lot of inference compute.
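Binance hasn’t published how its pipeline is wired, but the flag-validate-review pattern described above is a common fraud-detection design. Here is a minimal, purely illustrative sketch of that layering — every threshold, feature name, and score here is invented for the example, not drawn from Binance’s actual stack:

```python
# Illustrative sketch of a layered "flag -> validate -> decide" pipeline.
# All thresholds and features are hypothetical, chosen to show the idea:
# a cheap permissive filter runs everywhere, costlier checks run only on
# flagged items, and borderline cases go to review instead of auto-block.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount_usd: float
    new_device: bool       # login from a device never seen on this account
    velocity_score: float  # 0..1, how unusual the recent transaction rate is

def cheap_flagger(tx: Transaction) -> bool:
    # Stage 1: fast filter applied to every transaction.
    return tx.amount_usd > 10_000 or tx.velocity_score > 0.8

def validator(tx: Transaction) -> float:
    # Stage 2: heavier scoring, run only on transactions stage 1 flagged.
    score = 0.0
    if tx.new_device:
        score += 0.5
    score += min(tx.velocity_score, 1.0) * 0.4
    if tx.amount_usd > 50_000:
        score += 0.3
    return score

def decide(tx: Transaction) -> str:
    # Stage 3: combine the layers. Only high-confidence cases get blocked;
    # mid-range scores escalate to human review rather than freezing funds.
    if not cheap_flagger(tx):
        return "allow"
    score = validator(tx)
    if score >= 0.8:
        return "block"
    if score >= 0.4:
        return "review"
    return "allow"

print(decide(Transaction(amount_usd=60_000, new_device=True, velocity_score=0.9)))  # block
print(decide(Transaction(amount_usd=500, new_device=False, velocity_score=0.1)))    # allow
```

The point of the layering is economics: the cheap first stage keeps expensive inference off the vast majority of legitimate traffic, which is how a platform handling millions of daily transactions keeps the compute bill tractable.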
What users actually see
For the average Binance user, the AI defenses are invisible — until they trip a flag. Withdrawals can be delayed, logins challenged, or accounts temporarily frozen. Binance has faced criticism in the past for overly aggressive fraud filters that locked legitimate users out of their funds for hours. The company says its newer models are better at distinguishing real users from attackers, but it hasn’t published error-rate data.
The timing isn’t great for Binance to be touting AI defenses. Regulators in the EU and the U.S. have been circling over compliance issues, and any admission of false positives could fuel arguments that the exchange’s risk controls are either too tight or too loose. For now, Binance is leaning into the narrative that it’s the good guy in the AI scam fight.
What’s next
Binance plans to open-source some of its fraud-detection models later this year, according to internal roadmaps. That would be a major shift for an exchange that has historically kept its security stack proprietary. Whether regulators buy the $10.5 billion figure — or demand independent audits — is the open question. The exchange has said it will publish a detailed methodology paper in the coming weeks.