ANKIT WASNIK
LEAD SECURITY SOLUTIONS ARCHITECT, QUALYS
“There is a well-known observation by Peter Drucker that you cannot manage what you cannot measure. In today’s context, that idea goes even further: you cannot measure what you cannot see, and you cannot secure what you cannot manage. This has become especially relevant as organizational attack surfaces expand at an unprecedented pace. A few years ago, the definition of an enterprise asset was relatively straightforward: servers, network devices, and endpoints. Today, that definition has broadened significantly. Organizations operate across multiple infrastructure layers and technologies. Large language models (LLMs) introduce their own unique attack surfaces. Cloud environments come with distinct exposure points. Containers, Docker, Kubernetes clusters, digital certificates: each of these components represents an independent attack surface that must be monitored and protected.
To address this growing complexity, organizations deploy numerous security solutions. On average, enterprises now use more than 30 different security tools to safeguard their infrastructure. However, this creates another challenge: each tool measures risk differently. Some assess risk on a scale of 1 to 10, others 1 to 100, and still others 1 to 1000. There is no standardized framework for risk measurement, resulting in fragmented visibility and inconsistent prioritization. At the same time, the role of vulnerability management within overall risk management has evolved dramatically. The days of conducting vulnerability scans once a year—or even once a quarter—are long gone. Today, many organizations perform vulnerability scanning on a near real-time basis. Consequently, vulnerability management programs must mature and adapt at the same pace as the threat landscape. The scale of the challenge is evident in recent data. In 2024 alone, more than 40,000 vulnerabilities were disclosed.
Approximately 39% of these had publicly known exploits, and more than 78% were rated high or critical severity. Attempting to remediate every single vulnerability is an overwhelming task. The sheer volume makes it impractical for any organization to address all of them simultaneously, underscoring the need for intelligent prioritization and risk-based remediation strategies.
So what is the current state of cybersecurity risk management? We have 30 different tools, 30 different dashboards, 30 different reports, and 30 different ways to measure risk. There is no centralized SPM (Security Posture Management). That is where the concept of the Risk Operations Center (ROC) comes into play: a unified orchestration solution into which all of your SPM data is combined. Most of us have heard of the SOC (Security Operations Center), which most organizations deploy. A SOC is where you feed in data from all the various tools deployed in the company, be it your firewall, your Active Directory, or your network devices. When something goes wrong, or when an incident happens in the organization, you look at the SOC data to do a post-mortem analysis and identify what went wrong.”
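The scale mismatch the speaker describes (one tool scoring 1 to 10, another 1 to 100, another 1 to 1000) can be illustrated with a short sketch. The tool names, scales, and findings below are entirely hypothetical; this is only a minimal illustration of normalizing scores to a common scale before prioritizing, not any vendor's actual method.

```python
# Hypothetical maximum score for each tool's own scale.
tool_scales = {"tool_a": 10, "tool_b": 100, "tool_c": 1000}

# Hypothetical findings reported by three different tools.
findings = [
    {"asset": "web-server-01", "tool": "tool_a", "score": 8.5},
    {"asset": "db-server-02", "tool": "tool_b", "score": 40.0},
    {"asset": "k8s-node-03", "tool": "tool_c", "score": 910.0},
]

def normalize(finding):
    """Map a tool-specific score onto a common 0-100 scale."""
    return finding["score"] / tool_scales[finding["tool"]] * 100

# Prioritize remediation: highest normalized risk first.
prioritized = sorted(findings, key=normalize, reverse=True)
for f in prioritized:
    print(f"{f['asset']}: {normalize(f):.1f}")
```

In practice, a real platform would weigh far more than a raw score (exploit availability, asset criticality, exposure), but even this simple linear rescaling shows why comparing an "8" from one tool against a "400" from another is meaningless without a shared frame of reference.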