
India needs policies that balance AI's immense potential with measures to mitigate its risks, without tipping into overregulation
The recent explosion in the development of Artificial Intelligence (AI) and Generative AI applications such as ChatGPT is redefining the scope and implications of cybercrime in India. Change can be unsettling, and there are already conversations about ways to check this AI surge with new laws, rules, and protocols. Some fundamental questions are worth pondering: should we legislate? Can we legislate? If the answer is yes, to what extent and through what mechanisms?
Let us begin with a caveat: I do not purport to know, or even hazard a guess at, where current AI/Machine Learning (ML) technologies might lead. Our visions of the future always tend to be rooted in the fixtures of the present. Time, of course, has its inscrutable ways. AI is not a static field; it is evolving at breakneck speed. Its roots stretch back to the 1950s, when early researchers explored symbolic reasoning and rule-based systems.
Over the decades, breakthroughs in ML and the advent of deep learning in the 2010s transformed these early ideas into powerful, practical applications such as image and speech recognition. In recent years, the emergence of Generative AI (GenAI) has dramatically reshaped the landscape. Since 2022, models like OpenAI’s GPT series and DALL-E have brought advanced AI capabilities to a broader audience. These tools have become ubiquitous, powering innovative applications in business, governance, and everyday life. This rapid evolution is underscored by hard numbers: a PwC report estimated that AI could add up to $15.7 trillion to the global economy by 2030.
The very factors fuelling AI’s transformative potential – speed, scalability, adaptability, and broad availability – also make it a potent facilitator of cybercrime. AI enables the automation of criminal activity, allowing attackers to mount large-scale campaigns with little human intervention: tasks such as phishing, brute-forcing passwords, and scanning for vulnerabilities can all be automated, reducing the need for human labour. Adaptive malware powered by AI can learn from its environment, changing its behaviour to evade security software or adjust to new defences, which makes it more persistent and difficult to remove.
Natural Language Processing (NLP) and ML enable cybercriminals to generate highly personalised phishing emails that mimic the writing style of legitimate sources, making phishing attempts far harder to spot. AI-assisted brute-force and dictionary attacks are significantly faster and more effective than traditional methods: by leveraging pattern recognition and statistical analysis, such tools can crack complex passwords quickly. Advanced pattern-recognition capabilities also allow malware to analyse social media activity and online behaviour to craft highly targeted, convincing attacks – impersonating friends or colleagues, or exploiting specific psychological triggers to manipulate individuals.
With AI, cybercriminals can scale their attacks exponentially. AI can handle vast volumes of data and many simultaneous operations, such as executing multiple Distributed Denial of Service (DDoS) attacks from a network of bots or automating fake social media accounts to spread malware.
No wonder the demand for new criminal laws covering AI-enabled cybercrime is gaining traction in India. Legislators thus face the challenge of drafting rules that protect society without stifling innovation in a field that barely stops moving.
Curbs slow down innovation
Do AI-enabled cybercrimes deserve special legal treatment? Do we need to tweak the extant provisions of the Information Technology Act or the Indian Penal Code (now replaced by the Bharatiya Nyaya Sanhita) to deal with crimes committed using AI? For answers, consider some of the cybercrimes that AI has turbocharged, such as deepfakes. Was digital content being manipulated or ‘faked’ earlier? Obviously, yes: tools such as Photoshop were (and still are) used to create fake content. AI simply does the job better – much better.
Do we need a law that penalises AI-enabled forgery differently from forgery produced through more conventional means? Should penalties be graded by how realistic the forgery is, and if so, how would investigators establish such nuances? Or should fresh legislation outright ban AI algorithms that may be used to commit offences – assuming we can even sift such code from innocuous code?
Overregulation is a key impediment here. Stringent rules to ensure transparency or safety might inadvertently discourage companies from investing in research and development. For instance, if firms must reveal intricate details about proprietary algorithms, they may scale back on experimentation, thus slowing down breakthroughs.
Andrew Ng, a leading AI expert and entrepreneur, has expressed concern that “if we over-regulate at this early stage, we risk losing the momentum essential for breakthroughs in AI technology.” Sam Altman, CEO of OpenAI, has also raised concerns about a one-size-fits-all approach to AI regulation.
Regulating AI is a balancing act that demands careful oversight and openness to innovation. AI’s economic, ethical, and technological complexities require a regulatory framework that is flexible, adaptive, and informed by real-world data. As the experiences of such thought leaders suggest, heavy-handed regulation at this early stage could hamper the very innovation that promises substantial global benefits. Instead of rigid rules, policymakers might consider flexible frameworks with regular review and adjustment, so that regulations evolve alongside technological advances. International cooperation is also critical: only through a unified approach can nations avoid a fragmented regulatory landscape that undermines both safety and competitiveness.
The goal should be to craft policies that balance AI’s tremendous promise with the need to mitigate its risks – policies that are informed by economic data and the real-world experiences of those on the frontlines of technological innovation. By embracing a data-driven, collaborative approach to regulation, society can harness AI’s potential to drive progress while safeguarding the values and ethics that underpin a fair and just society.
(The writer is Director General of Police, Cyber Command, Karnataka)