Hackers Are Using AI Like It’s ChatGPT for Evil

Turns out, state-sponsored hackers have discovered a new favorite toy: artificial intelligence. According to Google’s Threat Intelligence Group (GTIG), cyber goons from Iran, North Korea, China, and Russia are now using tools like Google’s Gemini to supercharge their phishing scams, write better malware, and probably draft more convincing fake emails than your boss.
AI: Now with 100% More Spycraft
Google’s latest AI Threat Tracker spilled the tea—governments are using AI for everything from reconnaissance to malware development. Forget broken English and sketchy “Dear Sir/Madam” intros; these scams now sound like your college roommate asking for a “quick favor.”
- Iran’s APT42 has been spinning up AI-generated fake personas to lure defense targets. Think LinkedIn recruiters meets 007.
- North Korea’s UNC2970 used Gemini to profile industry professionals, right down to their salaries—talk about overachieving espionage interns.
Stealing AI to Build Worse AI
Meanwhile, model extraction attacks are on the rise (that’s hacker-speak for “copying someone else’s AI homework”). One campaign bombarded Gemini with 100,000+ prompts trying to clone its reasoning skills—basically the AI version of peeling off Tesla’s badge and calling your car a “Messla.”
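For the curious, the "copying someone else's AI homework" trick is basically API-driven distillation: query the target model at scale, log its answers, and train a cheap imitator on the pairs. Here's a minimal, harmless sketch of that loop; `query_target_model` is a hypothetical stand-in (a toy function, not any real API), and the "student" is just a lookup table where a real attacker would fine-tune a smaller model.

```python
# Sketch of model extraction via distillation. Everything here is a toy:
# query_target_model() is a stub standing in for a metered API call, and
# the "student" is a dict rather than an actual fine-tuned model.

def query_target_model(prompt: str) -> str:
    # Stand-in for a paid chat-completion call to the target model.
    # This toy "teacher" just uppercases its input, deterministically.
    return prompt.upper()

def harvest(prompts):
    # Step 1: bombard the target with prompts and record its outputs.
    # The real-world campaign described above allegedly sent 100,000+.
    return [(p, query_target_model(p)) for p in prompts]

def train_student(dataset):
    # Step 2: fit a cheap imitator on the harvested (prompt, response)
    # pairs. In practice this would be supervised fine-tuning of a
    # smaller LLM; a lookup table makes the idea visible in five lines.
    return dict(dataset)

dataset = harvest(["hello", "world"])
student = train_student(dataset)
print(student["hello"])  # prints "HELLO": the clone mimics the teacher on seen inputs
```

The asymmetry is the whole point: the attacker pays per-query API prices while the model owner paid for the training run, which is why providers rate-limit and watch for exactly this query pattern.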
Malware Gets a Makeover
The malware world’s gone full sci-fi too. GTIG found HONESTCUE, a nasty specimen that chats with Gemini’s API to code itself mid-attack. Another, COINBAIT, poses as a crypto exchange because hackers never get tired of stealing Bitcoin from people “just checking their balance.”
And because nothing is sacred, crooks are now abusing AI chat platforms (yes, including the friendly ones like Gemini, ChatGPT, and even Grok) to host malicious content. They share links to chats posing as "how-to" guides, which walk victims through pasting terminal commands that quietly install malware. So the next time an AI "guide" offers to fix your Mac, maybe don't paste.
Black Markets and Broken Morals
Underground forums are buzzing with people selling stolen API keys—because apparently, hackers are too lazy to train their own AIs. One toolkit, Xanthorox, promised cutting-edge cyber weaponry… until Google discovered it was just using stolen Gemini access. Oops.
Google: Still Polite, Now Extra Vigilant
Google’s response? Disable the offending accounts and API keys, shore up defenses, and remind everyone they’re still the good guys. GTIG insists no one has broken the cybersecurity universe yet, but the AI arms race between hackers and defenders is definitely on.
So, in short: bots are phishing better, malware’s brainstorming scripts, and spies are fluent in every language. Welcome to the future—where even hackers outsource to AI.
*Disclaimer: The content in this newsletter is for informational purposes only. We do not provide medical, legal, investment, or professional advice. While we do our best to ensure accuracy, some details may evolve over time or be based on third-party sources. Always do your own research and consult professionals before making decisions based on what you read here.*