Fake OpenAI Privacy Filter Model Hit #1 on Hugging Face, Stole Passwords

A malicious model posing as OpenAI's Privacy Filter soared to the top of Hugging Face's leaderboard this week, racking up 244,000 downloads in under 18 hours before the platform pulled it. The repository was disguised as a legitimate privacy tool but was actually designed to steal passwords.

How the fake model climbed to the top

The impersonator used the name and branding of OpenAI's real Privacy Filter, a popular model for removing sensitive data from text. By exploiting Hugging Face's ranking algorithm, which prioritizes download velocity, the malicious repo shot to number one within hours. The rapid rise caught the attention of security researchers, who flagged the repository as suspicious.

What the malware did

Behind the convincing cover, the model contained code that harvested credentials from infected systems. Once downloaded and executed, it silently exfiltrated saved passwords from browsers and other local storage. The scale of the compromise is unclear, but the 244,000 downloads suggest a wide potential victim base. The attackers likely aimed to collect credentials for further exploits or resale.

Hugging Face's response

Hugging Face removed the repository after discovery, but the takedown came after the model had already topped the charts and accumulated a massive download count. The platform has not disclosed how long the model was live before removal or whether any automated safeguards flagged it earlier. The incident raises questions about how thoroughly models are vetted before appearing on popular lists.

For now, users who downloaded the fake model are advised to change passwords and run security scans. Hugging Face has not announced any changes to its review process, and it's unclear whether the platform will notify affected users directly. The event highlights a growing vulnerability in AI model marketplaces: popularity can be gamed, and trust can be weaponized.