Google's Threat Intelligence Group stopped the first confirmed AI-built zero-day exploit this week. The exploit targeted a two-factor authentication (2FA) bypass in an open-source system-administration tool. This isn't just another vulnerability disclosure: it shows AI is now generating live threats.
The AI Author's Telltale Signs
The Python exploit shipped with suspiciously detailed, tutorial-style docstrings. It also carried a fabricated CVSS score. Google called both traits dead giveaways that an AI, not a human, wrote the code. Left unchecked, the tool would have let attackers bypass two-factor authentication on thousands of servers.
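To make those telltales concrete, here is a minimal sketch of how one might flag them in a Python file: an unusually high ratio of docstring text to code, plus an embedded CVSS claim. The thresholds and the `docstring_tells` helper are illustrative assumptions, not Google's actual detection criteria.

```python
import ast
import re

def docstring_tells(source: str, verbose_ratio: float = 0.5) -> dict:
    """Flag two stylistic tells: tutorial-grade docstrings and an
    embedded CVSS score. The 0.5 ratio threshold is an illustrative
    guess, not a published detection rule."""
    tree = ast.parse(source)
    doc_chars = 0
    for node in ast.walk(tree):
        # Only these node types can carry a docstring.
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if doc:
                doc_chars += len(doc)
    ratio = doc_chars / max(len(source), 1)
    return {
        "docstring_ratio": round(ratio, 2),
        "verbose_docstrings": ratio > verbose_ratio,
        "claims_cvss": bool(re.search(r"\bCVSS[:\s]*\d", source, re.IGNORECASE)),
    }

# Hypothetical snippet mimicking the reported style: step-by-step
# narration plus an invented severity score.
sample = '''
def bypass(token):
    """Step 1: Craft the request.

    This function explains, in tutorial detail, how each
    parameter is built. CVSS: 9.8 (invented by the model).
    """
    return token
'''
print(docstring_tells(sample))
```

A real triage pipeline would weigh many more signals, but even this crude ratio separates tutorial-style generated code from typical terse exploit code.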
State Actors' AI Training Camps
Separate from this incident, Chinese and North Korean state operations are building private AI models, training them on a dataset of 85,000 known vulnerabilities. Russian-linked malware families PROMPTFLUX and PROMPTSPY also use Gemini queries for attack planning. Google stressed that its own Gemini model was not involved in this week's exploit.
Google's AI Shield Goes Live
Google pushed Big Sleep and CodeMender into action against the threat. Big Sleep automatically neutralized the weaponizable flaw within hours, and CodeMender began patching vulnerable lines of code across Google's systems. Security teams now run both tools around the clock as AI-driven attacks accelerate.
Crypto Exchanges Hit the Alarm
A fresh Chrome vulnerability exposed private keys just last month, and crypto exchanges are now fast-tracking AI security systems in response. Binance Research found that AI agents can exploit smart-contract flaws roughly twice as fast as they can discover them. One exchange paused withdrawals for six hours after the Chrome bug surfaced. The timing could hardly be worse.
Google says the next critical test comes in July, when new AI-driven attack patterns are expected to surface. Some exchange security teams already report AI tools spotting threats faster than human teams can respond.




