AI-Built Exploit Bypasses Two-Factor Authentication, Google Threat Team Confirms

Cybercriminals used an artificial intelligence model to discover and weaponize a previously unknown software flaw, Google's threat intelligence team has confirmed. The resulting exploit bypasses two-factor authentication (2FA), a widely recommended security measure.

How the Attack Works

The malicious actors deployed an AI system to scan for a zero-day vulnerability—a bug the software maker didn't know existed. Once the AI identified the weakness, the attackers crafted an exploit that defeats 2FA. That means even users who had enabled an extra layer of protection—like a one-time code from an app or a hardware key—could still be compromised.

Google's Confirmation

Google's threat intelligence team verified the incident, though the company has not disclosed which specific software contained the flaw or which users were affected. The team's statement underscores a growing concern: AI can accelerate the discovery and exploitation of security holes faster than traditional methods.

Why It Matters

Two-factor authentication has long been considered a critical defense against account takeovers, but this incident shows that no single security measure is foolproof. The AI-generated exploit doesn't just steal a password; it appears to intercept or replay the second factor in real time, neutralizing the added protection.
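The report doesn't say which kind of second factor was defeated. As background on why real-time interception works against the common app-generated codes, here is a minimal sketch of time-based one-time password (TOTP) generation per RFC 6238, using only the Python standard library; the secret shown is an arbitrary example value, not anything from the incident:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time code (SHA-1, 6 digits)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The code depends only on which `step`-second window `t` falls in.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Any two requests inside the same 30-second window yield the identical
# code, which is exactly the window a real-time relay attack exploits.
secret = "JBSWY3DPEHPK3PXP"  # example Base32 secret
assert totp(secret, t=0) == totp(secret, t=29)
```

Because the code stays valid for the full time window, an attacker who tricks a victim into entering it on a phishing page can forward it to the real service before it expires, which is why phished TOTP codes offer little protection against a live relay.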

What Comes Next

Security teams are now racing to understand how the AI model was trained and whether similar attacks are likely on other platforms. The affected software vendor is expected to release a patch soon, but the broader question remains: how do defenders keep up when attackers can weaponize AI this quickly?