Anthropic's Claude Mythos AI Aided macOS Kernel Exploit for M5, Calif Claims

Researchers used a preview version of Anthropic's Claude Mythos AI to help develop a macOS kernel exploit targeting Apple's M5 system, security startup Calif claimed this week. The development offers a concrete example of AI being used to craft a low-level operating system attack.

The Claim and What It Involves

According to Calif, the researchers employed the Claude Mythos preview to assist in building the exploit, which targets the kernel of macOS running on Apple's M5 chip, the company's latest custom processor. Kernel exploits are among the most severe vulnerability types because they can give an attacker full control over the operating system, bypassing many security layers.

Calif did not release technical details of the exploit or specify which version of macOS was targeted. The researchers' identities and affiliations were also not disclosed. It's unclear whether the exploit has been tested in a live environment or remains theoretical.

Why the M5 Is a Significant Target

The M5 is Apple's newest system-on-a-chip, succeeding the M4. It powers the company's high-end Mac lineup. A successful kernel exploit on this platform would allow an attacker to gain persistent, low-level access to the device. That level of access can be used to install malware, steal data, or spy on users without detection.

Using AI to help create such an exploit could lower the barrier for attackers. Traditional kernel exploit development requires deep expertise and a lot of manual work. An AI assistant like Claude Mythos might help researchers identify vulnerabilities faster or write parts of the exploit code.

AI's Role in Attack Development

Anthropic's Claude Mythos is a large language model designed for complex reasoning and code generation; in this case, the researchers reportedly used a preview build. The exact extent of the AI's contribution is unknown: Calif did not say how much of the exploit was AI-generated versus human-written, nor whether the code has been shared with Apple for a patch.

The claim arrives amid broader discussions about the dual-use nature of advanced AI. While these models can help defenders analyze threats, they can also be turned toward offensive purposes. Publicly documented cases of AI-assisted exploit development remain rare, making this one notable.

Unanswered Questions

Apple has not commented on the claim. The exploit code has not been made public, and no independent verification has emerged. Calif did not provide a timeline for releasing more details or for disclosing the vulnerability to Apple.

For now, security teams are left weighing the implications. If the exploit is real, it suggests that AI can accelerate the creation of dangerous tools. If it's not, the claim still highlights a growing area of concern. Either way, the intersection of AI and offensive security isn't going away — and this case shows it's already here.