
ADV (DR.) PRASHANT MALI
PH.D. IN CYBER LAW, PRACTICING LAWYER, BOMBAY HIGH COURT
“AI has no feelings, and that’s precisely what makes it both powerful and dangerous. It no longer just supports cybersecurity—it has become a weapon. From deepfake frauds and AI-generated malware to autonomous phishing and synthetic identities, AI is now automating the entire attack chain. Earlier, a human actor had to deploy ransomware and negotiate with victims; today AI handles everything—from multilingual phishing emails to morphing digital signatures. CISOs are facing a battlefield that has evolved from compliance to combat. If you don’t understand the legal framework—the DPDPA, Section 66 of the IT Act, or the emerging concept of the 'liar’s dividend', where even genuine evidence can be dismissed as AI-generated—you’re not just at risk, you’re exposed.
The challenge now is that AI-generated threats are not just technological—they are legal, evidentiary, and organizational. Courts struggle to verify deepfake videos or AI-generated documents because the chain of custody is easily broken, and forensic reports are often inadmissible. AI responses are dynamic and person-specific—what an AI chatbot tells one user can differ for another. So when evidence varies by user or by time, how can it stand in court? We still lack table-screen setups, SOPs for AI evidence verification, and proper electronic evidence management frameworks. Even in major breaches, such as the Cosmos Bank heist, evidence gets contaminated because there is no clear policy on who collects it first. Without formal AI audit trails, signed forensic reports, and awareness in the lower judiciary, we risk undermining justice in AI-fuelled cybercrimes.
CISOs must now prioritize AI-specific clauses in vendor contracts, employee conduct codes, and incident response plans. Boardrooms must wake up—not just to AI’s investment hype but to its governance risk. Deepfake detection should be an annual IT budget mandate, and AI threat modelling must become part of security drills. Embed AI in IS audits, mandate vendor AI declarations, and preserve the chain of custody for all AI evidence. Because in a world where AI can impersonate voices, spoof identities, and even manipulate truth, organizations must move beyond slogans like ‘zero trust’ and prove their cybersecurity maturity with real, defensible documentation. AI is not just a tool—it’s a force. Use it wisely or risk being outmanoeuvred by it.”