Fraudsters are deploying AI-powered chatbots posing as trusted digital assistants to promote fake cryptocurrency projects, using scripted investment pitches, fabricated branding and urgency tactics to pressure victims into making irreversible crypto payments.
Cybercriminals are increasingly using artificial intelligence to power sophisticated cryptocurrency scams, with fake chatbots now impersonating well-known AI assistants to win victims’ trust and extract funds.
Security researchers recently identified a fraudulent “Google Coin” presale website featuring a chatbot claiming to be Gemini, the AI assistant developed by Google. The chatbot introduced itself as an official representative of the platform and guided visitors through what appeared to be a legitimate investment process. Google, however, has not launched any cryptocurrency.
AI as a persuasive sales tool
The chatbot delivered polished responses, offered projected returns and encouraged users to purchase tokens during a so-called presale phase. When asked about potential profits, it provided specific financial estimates, claiming early investments could multiply several times after listing.
The site mirrored Google’s branding elements, including familiar design cues and logos, creating the illusion of authenticity. Visitors were shown a professional-looking dashboard displaying token sales progress and bonus tiers designed to incentivise larger purchases.
Once users chose to invest, they were directed to send cryptocurrency payments to a designated wallet address. As with most crypto transfers, the payments were irreversible. There was no evidence of a legitimate exchange listing, regulated entity or verifiable company structure behind the offering.
A scalable scam model
Experts warn that AI chatbots are transforming traditional fraud operations. Previously, scammers relied on human operators to engage potential victims through messaging platforms. AI now allows fraudsters to handle hundreds of conversations simultaneously, delivering consistent and convincing responses without fatigue.
The chatbot in this case reportedly avoided direct answers about company registration, regulatory oversight or independent audits. Instead, it repeated scripted assurances about security, transparency and future roadmaps, while deflecting scrutiny.
The broader trend reflects growing misuse of AI tools in financial crime. Fraud monitoring agencies have reported a rise in crypto-related scams, with digital assets remaining attractive to criminals due to the speed and irreversibility of transactions.
Authorities and cybersecurity experts advise users to verify investment claims through official company websites, avoid projects promising guaranteed returns, and treat urgency-driven presales with caution. As AI-driven scams become more advanced, vigilance remains the first line of defence.