Google's Threat Intelligence Group says it has high confidence that a threat actor used an artificial intelligence model to discover a zero-day vulnerability in a widely used system administration tool. The flaw was then weaponized into an exploit that bypasses two-factor authentication (2FA), a core security measure many organizations rely on to protect accounts.
AI-driven discovery and weaponization
The attacker didn't uncover the flaw through manual analysis. According to Google's threat intelligence team, the AI model was used both to identify the previously unknown vulnerability and, later, to develop an exploit capable of defeating 2FA. This marks one of the first documented cases in which an AI has been directly linked to both the discovery and weaponization of a zero-day in a real-world attack.
Zero-day vulnerabilities are especially dangerous because the software vendor has no knowledge of them, leaving no patch available. By using an AI to hunt for such flaws, the threat actor accelerated a process that traditionally requires extensive manual reverse engineering or luck.
Why system admin tools are prime targets
The vulnerability exists in a tool commonly used by system administrators to manage networks and servers. These utilities typically run with elevated privileges, making them a prized entry point for attackers. Compromising the admin tool allowed the threat actor to bypass authentication mechanisms that depend on the tool's integrity. Google did not name the specific software, citing ongoing investigations.
Tools with deep system access are attractive because they can be used to move laterally across a network, disable security controls, and exfiltrate data. The use of an AI model to find such a vulnerability suggests that attackers are now automating the most difficult part of the process: locating a needle in a haystack of code.
The 2FA bypass in practice
Two-factor authentication is widely considered a strong defense against credential theft. But it's only as secure as the infrastructure beneath it. If an attacker can control the system administration tool that processes or verifies 2FA tokens, they can intercept or nullify the second factor. Google's high-confidence assessment indicates this wasn't a theoretical proof-of-concept — it was a live operation that succeeded.
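To make the point concrete, consider a minimal sketch of how this class of bypass works. The code below is hypothetical and is not based on the actual exploit, which Google has not described. It implements standard RFC 6238 TOTP verification in Python and shows that an attacker who controls the verifying process needs no cryptographic break at all, only the ability to replace the check:

```python
# Hypothetical sketch: 2FA is only as strong as the code that verifies it.
# If an attacker can patch or hook the verification routine inside a
# privileged tool, the second factor is nullified regardless of how
# securely the token was generated.
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP: HMAC-SHA1 over the current time step."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_2fa(secret: bytes, submitted: str) -> bool:
    """The legitimate check: constant-time comparison of codes."""
    return hmac.compare_digest(totp(secret), submitted)

def compromised_verify_2fa(secret: bytes, submitted: str) -> bool:
    """What the check becomes once the verifying process is attacker-controlled."""
    return True  # every code "passes"; the second factor is gone

if __name__ == "__main__":
    secret = b"shared-secret-demo"  # hypothetical shared secret
    print(verify_2fa(secret, totp(secret)))            # True: real check works
    print(compromised_verify_2fa(secret, "000000"))    # True: any code passes
```

The lesson of the sketch is that the second factor's strength lives entirely in the integrity of the code path that validates it, which is exactly the layer a compromised admin tool sits on.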
The attack raises questions about the limits of 2FA. While it remains effective against phishing and password theft, this incident shows that a determined adversary with enough resources can find ways to undermine it at a lower layer.
What's known and what isn't
Google has not disclosed the name of the threat actor, the AI model used, or the affected tool. The lack of detail is common in such cases — full disclosure could tip off other attackers or reveal intelligence-gathering methods. Security teams are left to monitor for unusual behavior in their system administration tools and to enforce strict access controls.
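One basic form that monitoring can take is baseline-hashing the admin tool binaries themselves and alerting on drift. The sketch below is illustrative only: the watched paths are hypothetical placeholders, and a production environment would use a dedicated file integrity monitoring system such as AIDE or Tripwire rather than a script, but the principle is the same.

```python
# Illustrative sketch: record SHA-256 baselines for privileged admin
# binaries, then flag any change on later runs. Paths are hypothetical.
import hashlib, json, pathlib, sys

WATCHED = ["/usr/sbin/admintool", "/usr/local/bin/netmgr"]  # hypothetical paths
BASELINE = pathlib.Path("fim_baseline.json")

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> None:
    current = {p: sha256(p) for p in WATCHED if pathlib.Path(p).exists()}
    if not BASELINE.exists():
        # First run: record the trusted baseline.
        BASELINE.write_text(json.dumps(current, indent=2))
        print("baseline recorded")
        return
    baseline = json.loads(BASELINE.read_text())
    changed = [p for p, digest in current.items() if baseline.get(p) != digest]
    if changed:
        print(f"ALERT: admin tool binaries changed: {changed}", file=sys.stderr)
        sys.exit(1)
    print("no drift detected")

if __name__ == "__main__":
    main()
```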
The incident underscores a troubling trend: AI is no longer just a defensive tool. It's being turned against the same systems it was designed to protect. The security community now faces the reality that automated vulnerability discovery may become a standard part of the attacker's toolkit. How many similar AI-found zero-days are already out there, waiting to be used, is an open question.