Google has confirmed that attackers used artificial intelligence to craft a zero-day exploit capable of bypassing two-factor authentication. The disclosure, made by the company's security team, marks one of the first documented cases where AI was directly employed to build a working exploit for a previously unknown vulnerability.
The AI-driven attack
Investigators said the exploit targeted a flaw in an undisclosed system. Rather than relying on traditional hacking techniques, the attackers appear to have leveraged machine learning models to generate code that evaded existing defenses and tricked 2FA mechanisms. Google did not specify which authentication products were affected or how many users may have been impacted.
Two-factor authentication is widely considered a critical layer of security. By requiring a second verification step — often a code sent to a phone or generated by an app — it is meant to stop intruders who have stolen a password. This exploit rendered that protection useless.
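For context on the second step the attack reportedly bypassed: the codes generated by authenticator apps typically follow the TOTP algorithm (RFC 6238), which derives a short numeric code from a shared secret and the current time. The article does not say which 2FA scheme was targeted; the sketch below simply illustrates how a standard six-digit code is produced, using only Python's standard library.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                     # number of 30-second windows elapsed
    msg = struct.pack(">Q", counter)               # counter as 8-byte big-endian integer
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59 the counter is 1, matching RFC 4226's test vectors.
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code is a pure function of the secret and the clock, an attacker who can intercept or phish a freshly generated code — or steal the secret itself — can defeat this layer without breaking the cryptography, which is why "bypassing 2FA" usually means attacking the surrounding workflow rather than the algorithm.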
What Google is doing
The company said it has patched the vulnerability and is monitoring for similar attempts. It did not release technical details, citing the risk of copycat attacks. Security teams inside Google are now analyzing the AI-generated code to understand how the model was trained and what data it used.
The confirmation comes as the security industry grapples with the rapid evolution of AI-driven threats. While defenders have used machine learning for years, this incident shows attackers are adopting the same tools to find and exploit weaknesses faster than before.
Unanswered questions
It is not known whether the hackers were state-sponsored or part of a criminal group. Google declined to say when the exploit was discovered or how long it had been active. The company also did not say whether any customer data was stolen.
The case raises a broader question: how many similar AI-generated exploits are already out there, undetected? For now, security teams are left to play catch-up with a threat that can evolve at machine speed.