Security
Proofpoint finds India ahead in AI use, but 63% of businesses hit by AI security incidents
2026-04-29
Proofpoint has released its 2026 AI and Human Risk Landscape report, which explores the widening gap between how quickly organisations are operationalising AI and how prepared they are to secure and investigate the risks that follow. The global study, which surveyed more than 1,400 security professionals across 12 countries, examines how rapid AI adoption is transforming enterprise collaboration and exposing structural weaknesses in security controls and incident response.
AI is increasingly permeating organisations and is now operational across most functions, with deployments spanning customer support, internal messaging, email workflows, and third-party collaboration. India leads the world in AI adoption, where 94% of organisations have deployed AI assistants beyond the pilot stage, and 88% are actively piloting or rolling out autonomous agents. Yet while organisations are investing in AI tools and controls, many cannot confirm those controls are effective—about one-fourth (26%) are not fully confident their AI security controls would detect a compromised AI, and more than three in five (63%) with controls in place have already experienced a confirmed or suspected AI-related incident.
Further, most organisations globally report they are not fully prepared to investigate AI-related incidents that span multiple systems and channels. Confidence is stronger in India, where about 57% say they are fully prepared to investigate such an incident.
“India is leading globally in enterprise AI adoption, but this year’s findings point to a growing gap between rapid AI adoption and security readiness,” said Bikramdeep Singh, India Country Manager, Proofpoint. “While many organisations in India have AI security measures in place, 26% are still not fully confident those controls would detect compromised AI. As AI becomes more deeply embedded in day-to-day workflows—powering assistants and autonomous actions across customers, partners and critical business processes—organisations must ensure they can validate their protections and effectively investigate threats that span multiple collaboration channels. The organisations that will scale AI successfully will be those that build unified visibility and stronger control across the environments where people and AI interact.”
Key India findings from Proofpoint’s 2026 AI and Human Risk Landscape report include:
· AI Deployment Has Outpaced Security Readiness. AI adoption has moved into production faster than governance frameworks have matured. India leads the world in AI adoption, with 94% of organisations deploying assistants beyond the pilot stage and 88% advancing autonomous agents. Yet over one-third (35%) describe security as catching up, inconsistent or reactive. More than three in five (63%) Indian organisations report experiencing a suspected or confirmed AI-related incident, indicating that exposure is already present in live environments.
· Collaboration Channels Are the Primary AI Attack Surface. AI is expanding the attack surface, enabling threats to spread at machine speed and impact connected workflows. While email remains the most common threat vector in India at 70%, exposure now extends across SaaS and cloud applications (59%), SMS or text (55%), and collaboration tools such as Teams or Slack (54%). Among organisations that experienced an AI-related incident, exposure increases across every channel: 73% via email and 65% via third-party SaaS and cloud applications.
· Confidence Exceeds Control Effectiveness. While many organisations in India have security controls in place, they lack assurance that those controls work. More than three in five (63%) organisations in India report having AI security coverage in place, yet about one-fourth (26%) are not fully confident those controls would detect compromised AI. Further, more than two-thirds (69%) of organisations with controls still reported an AI-related incident. In India, gaps persist in visibility into AI or agent activity (57%), training (50%), and governance alignment across teams (41%).
· Investigation Readiness Lags Behind Incident Reality. When AI-related incidents occur, many organisations in India struggle to investigate them effectively. Only 57% of respondents say they are fully prepared to investigate an AI- or agent-related incident, and close to half (47%) report difficulty correlating threats across channels. As AI-related activity spans email, collaboration platforms and cloud systems, the ability to reconstruct events depends on visibility across connected environments, which many organisations do not yet have.
· Tool Sprawl Is a Structural Barrier. Fragmentation across security stacks is compounding the challenge, limiting visibility and slowing response when incidents move across systems at machine speed. Almost all (96%) organisations in India say managing multiple security tools is at least moderately challenging, and 71% describe it as very or extremely difficult. Respondents cite integration challenges (48%), operational cost pressures (47%), and difficulty correlating threats (47%).
· Security Architecture Becomes a Strategic Priority as AI Scales. Close to three in five (57%) organisations in India are actively pursuing vendor and tool consolidation, and 62% believe a unified platform is more effective than point solutions. Over the next 12 months, more than two-thirds (67%) plan to expand AI protections, 71% intend to extend collaboration channel coverage, and 58% expect to move toward a unified platform approach.
"While AI has introduced new risks, such as prompt injection, its bigger impact has been amplifying the risks we've always had," said Ryan Kalember, Chief Strategy Officer at Proofpoint. "Running untrusted code, mishandling sensitive data, and losing control of credentials are the same challenges that humans have created for decades. AI executes them at machine speed and scale. When organisations hand AI the keys to act on their behalf—across customers, partners, and internal systems—the blast radius of any one of those failures grows dramatically. The answer isn't to treat AI as a novel threat category, but to apply rigorous, proven controls to what AI touches, what it runs, and what it's allowed to authenticate as. Organisations that get that foundation right early will scale AI confidently. Those that don't are just automating their own exposure."