A new study led by the University of Cambridge suggests artificial intelligence isn't giving hackers any superpowers. Instead, the biggest impact so far has been helping them churn out blog spam. The findings challenge the narrative that AI is dramatically lowering the barrier for sophisticated cyberattacks.
What the Study Found
Researchers analyzed how cybercriminals actually use AI tools, drawing on real-world examples rather than theoretical scenarios. Their conclusion: AI hasn't turned average hackers into elite ones. The most common use observed was generating text for spam blogs and phishing emails.
The Blog Spam Problem
AI-generated blog spam is a growing nuisance. But it's not the kind of threat that keeps security experts up at night. The spam is often generic and easily filtered by modern systems. Still, it adds noise. For security teams, that's a low-level annoyance, not a crisis.
Why the Fear May Be Overblown
There's been widespread worry that AI could enable new types of attacks. The Cambridge-led research suggests those fears are premature. Hackers still rely on known vulnerabilities and social engineering. AI, for now, is more of a productivity tool for mundane tasks. It's not creating superhackers out of script kiddies.
The study doesn't let defenders off the hook. Cybercriminals will keep finding new angles. But it does provide a reality check. The question remains whether this will change as AI models improve. For now, the greatest threat from AI in hacking might be a clogged inbox.