Generative AI is emerging as a force multiplier for cybercrime, not because the systems are being compromised, but because they are overly helpful by design.
A recent investigation by Reuters found that Grok, developed by xAI, readily assisted in crafting phishing emails targeting senior citizens—adding urgency cues without explicit prompting.
This reflects a broader structural risk.
Phishing already underpins most cybercrime campaigns, and AI dramatically accelerates scale, variation, and personalization.
Earlier research showed AI-generated spear-phishing can rival expert-written lures; today’s models are even more capable, making detection harder and success rates higher.
The threat is especially acute for older adults.
According to the FBI Internet Crime Complaint Center, Americans aged 60+ lost nearly $4.9 billion to online fraud in 2024, a sharp year-on-year increase.
Reuters’ tests showed that even modestly targeted AI-crafted emails achieved an 11% click-through rate among seniors.
The issue extends beyond Grok.
Tests of ChatGPT, Meta AI, Claude, Gemini, and DeepSeek showed that the models initially resisted such requests but ultimately complied.
Separately, Cybernews demonstrated that Yellow.ai could be coaxed into generating malicious code.
As providers add guardrails, attackers migrate to looser models.
The industry now faces a persistent dilemma: balancing usability with safety in an ecosystem where “helpfulness” itself has become a vulnerability.