VAR Panchayat
Securing Networks: A Balanced Approach
2017-04-15
Over the last two decades, in a hurry to cash in on this accelerating market, software and hardware vendors flooded the market with network and security products riddled with untested code and vulnerabilities, and enterprises and governments deployed them with minimal due diligence. When the inevitable exploitation of these security holes began, businesses and vendors alike took an ad hoc approach to security, with less-than-satisfactory results. Consequently, we are today saddled with a security apparatus that has been bolted onto networks, rather than built into the technology that runs them.
Where does all this leave the hapless owners of networks? They have to keep operations moving forward, regardless of technical vulnerabilities and policy confusion. Nowhere is the risk higher, and the need starker, than in critical national networks. While government bigwigs and corporate head honchos sort out a response to the threat, there is a lot that the people operating these mission-critical networks can do to mitigate the threat and secure defence networks.
We will not attempt to identify individual threats and responses, but will instead set out a generic approach to network security which, we hope, will provide a framework for security and serve as a guide for focusing security resources.
The Methodology
The generic principle of risk assessment remains the same as for any security threat. We begin with a basic definition of risk. Without proof, we shall state that:

Risk = Likelihood of an attack × Degree of vulnerability × Impact of a successful attack

This loose formula is self-explanatory: if one is sure to be attacked and has a glaring vulnerability, but the impact of the attack would be minimal, then the risk is low. Likewise, if an attack is certain and the impact would be catastrophic, but there are very few vulnerabilities, the risk is still low, and so on.
The interesting thing about this definition of risk is that the first-order likelihood of attack is usually outside the control of the organization: externally facing critical networks are almost certain to be attacked, no matter what, while an isolated internal network is like a remote cattle ranch and is unlikely to be attacked. Each organization will need to assess the likelihood of an attack, and realize that likelihood is not an independent variable but varies with vulnerability and impact: if you plug vulnerabilities and mitigate impact, all but the most determined attackers will be discouraged, thus reducing the likelihood. So while organizations do not often have direct influence on the first-order likelihood, they do have control over the second order.
The process of reducing risk is iterative: after each round of plugging vulnerabilities and mitigating possible impacts, the new risk has to be reassessed, and the cycle continues until the risk is acceptable, or diminishing returns make further iterations impractical.
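The formula and the iterative cycle above can be sketched in a few lines of code. The scores, reduction factors and acceptability threshold below are illustrative assumptions, not values from this article; in practice each would come from the organization's own assessment.

```python
# Risk = Likelihood of attack x Degree of vulnerability x Impact,
# with an iterative reduce-and-reassess loop. All numbers are
# hypothetical scores on a 0-1 scale.

def risk(likelihood: float, vulnerability: float, impact: float) -> float:
    """The loose multiplicative definition of risk stated above."""
    return likelihood * vulnerability * impact

def reduce_risk(likelihood, vulnerability, impact,
                acceptable=0.1, max_rounds=10):
    """Iterate: plug vulnerabilities, mitigate impact, reassess."""
    for _ in range(max_rounds):
        if risk(likelihood, vulnerability, impact) <= acceptable:
            break
        # Each round plugs some vulnerabilities and mitigates some
        # impact; deterred attackers lower the likelihood a little too.
        vulnerability *= 0.5
        impact *= 0.7
        likelihood *= 0.9
    return risk(likelihood, vulnerability, impact)

# An externally facing network: attack near-certain, glaring holes,
# high impact. A few rounds bring the residual risk under threshold.
print(reduce_risk(likelihood=0.95, vulnerability=0.8, impact=0.9))
```

Note how likelihood falls only indirectly, as a side effect of plugging vulnerabilities and mitigating impact, mirroring the second-order control described above.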
Armed with this method, we are now ready to apply it to critical national networks.
Assess Vulnerabilities
This is the first step that operational leaders should take, basically because vulnerabilities are largely independent of impact and likelihood; they are amenable to objective analysis and yield immediate returns. Vulnerabilities may broadly be classified and assessed under the following categories: Technology Vulnerabilities; Human Vulnerabilities; and Policy Vulnerabilities.
Technology vulnerabilities are the easiest to quantify and tackle. Most of the vulnerabilities arise because software tries to do “cute” things in an effort to make content rich and clever. An example is the movement from Web 1.0 to Web 2.0, where browsers and servers were pumped full of security holes in the effort to make the web experience rich and interesting. Boring, read-only Web 1.0 pages were very difficult to attack; they may have been dull, but they were safe. The richer the feature set of software, the greater the complexity, and the greater the vulnerability.
A simple response to security holes would be: “Don't use any such product anywhere”. This approach may not be practical in generic civilian networks, but in critical national networks, where the content is streamlined, controlled and limited, the security-versus-complexity tradeoff can be biased towards security, and it becomes a very practical approach. A more nuanced approach would be to use a very limited feature set of selected products, having disabled or compiled out all but the essential code. Without going into greater detail, here is a set of recommendations.
Technical Factors
The KISS principle (Keep It Simple, Stupid) is the best response to technical vulnerabilities. Not only does it force organizations to look at applications and products as a source of vulnerabilities, but it also saves on resources and operational expenditure by limiting the set of managed entities in the network. As a start,
• Web 1.0 vs Web 2.0: disable active content in websites and browsers
• Choose a browser, then disable all plug-ins and “cute” extras
• Move towards a simple Operating System with basic capabilities
• Simple databases with simple queries, sacrifice performance for security
• Configure or compile out unused applications and infrastructure; have only what you need and use

KISS defines an entire approach to security, and need not be limited to software and systems. It is especially useful and effective in sensitive networks, because a bit of user experience can be sacrificed for security.
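The “have only what you need” recommendation can be made mechanical: maintain a minimal allowlist of permitted services and flag everything else for disabling or removal. The service names and allowlist below are hypothetical examples for illustration, not a recommended baseline.

```python
# A sketch of auditing a host's enabled services against a minimal
# KISS allowlist. MINIMAL_ALLOWLIST is an invented example set;
# a real baseline would come from the network's own requirements.

MINIMAL_ALLOWLIST = {"sshd", "syslogd", "ntpd"}

def audit_services(enabled_services):
    """Return, sorted, every enabled service not on the allowlist --
    candidates to be disabled, uninstalled or compiled out."""
    return sorted(set(enabled_services) - MINIMAL_ALLOWLIST)

enabled = ["sshd", "syslogd", "ntpd", "cupsd", "bluetoothd", "avahi-daemon"]
print(audit_services(enabled))  # anything not on the allowlist is flagged
```

The same allowlist pattern applies equally to browser plug-ins, database features and compiled-in application modules.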
People and Process Factors
Increasingly, idiotic and malicious insiders are becoming the single-largest security threat to critical national networks. Idiot-proofing networks is a (not so simple) matter of identifying Murphy areas and plugging them, and of making sure that a well-thought-through operational policy is vigorously implemented, monitored and continuously updated. Customized USB ports are an example of plugging a Murphy area: an open USB socket is a glaring Murphy area, because some day, somewhere, some bright spark will shove a stick into it, so it needs to be plugged.
The malicious insider, on the other hand, is a much more difficult vulnerability to plug. Tackling this threat will tax the ingenuity and resources of the organization, especially of sensitive government organizations. The measures to plug vulnerabilities exposed by malicious insiders cut across all activities and functions of the organization: from making sure that morale is high (a vast majority of insider attacks are by disgruntled employees or ex-employees), to making sure that the cost of violating policy is prohibitively high, through to having robust access control and anomaly detection systems. This is too vast a topic to be dealt with fully here, but we will outline a generic approach that may prove helpful.
• Layering/Splitting of Duties: All critical activity in ICT operations should be divided amongst multiple individuals. The aim should be to make it impossible for one person to make critical changes without active collaboration from another.
• Least Privilege: Grant each individual only the bare minimum privilege required to do his or her job, and manage privileges actively as job roles change.
• Extra Monitoring of Users with High Privileges: System administrators and operations personnel need to be monitored specifically for anomalous behaviour. Over half of all malicious activity, and almost all highly sophisticated malicious activity, is carried out by people with both the skills and the access privileges.
• Strictly Regulate and Monitor System Changes: A well-defined, widely publicized and strictly monitored change management process goes a long way in deterring or detecting malicious activity.
• Scrutiny of Employee Online Behaviour: All activity that affects critical system behaviour should be logged in fail-proof and tamper-proof system loggers. Routine and surprise audits of this system log should be performed.
• Block Remote Access: Most nongovernment enterprises provide remote access to employees, and use layered defence to secure the network. For critical national networks, blocking remote access is a viable option, and should be actively considered. Access to network operations terminals should be regulated physically.
• Pre-Attack Behaviour Detection and Monitoring Policy: Most malicious attacks are preceded by tell-tale pre-attack behaviour, and this needs to be studied and supervisors educated about it. In civilian networks, this usually begins with strict background checks during the hiring process (in one study, 30 per cent of those who carried out insider attacks on businesses had previous convictions). In critical national networks, a special security clearance may perhaps be sought. A system of monitoring, reporting and investigating disruptive behaviour needs to be worked out.
• Understand your Network: Baseline network behaviour, understand the resource utilization patterns, audit the ports used, traffic patterns, flow patterns, etc. An understanding of normal network behaviour enables quick detection of anomalous behaviour or malicious activity.
• Audit your Applications: A comprehensive list of executables running on systems yields a quick way of detecting new and unauthorized applications running on the system.
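The last two points lend themselves directly to automation: record a baseline of the executables normally running, then diff the live system against it. The process names below are purely illustrative; a real audit would pull the running set from the host itself.

```python
# A minimal sketch of the "Audit your Applications" step: compare the
# executables currently running against a recorded baseline and flag
# anything new (unauthorized) or missing (possibly disabled by an
# attacker). All process names here are invented examples.

def audit_applications(baseline, running):
    """Return (new, missing): executables absent from the baseline,
    and baselined executables that are no longer running."""
    baseline, running = set(baseline), set(running)
    return sorted(running - baseline), sorted(baseline - running)

baseline = {"init", "sshd", "syslogd", "postgres"}
running = {"init", "sshd", "syslogd", "postgres", "cryptominer"}

new, missing = audit_applications(baseline, running)
print("unauthorized:", new)   # new executables warrant investigation
print("missing:", missing)    # vanished services can also signal trouble
```

The same baseline-and-diff idea extends to ports, traffic volumes and flow patterns for the “Understand your Network” step.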
Insider threats are an area of intense research, and each of the steps above will require in-depth study and analysis to be fully implementable. Nonetheless, they are very much doable, and indeed necessary, in order to maintain a degree of security.
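The “tamper-proof system logger” recommended above for change management is usually built as a hash chain: each entry's digest covers the previous entry's digest, so altering any earlier record breaks verification of everything after it. The log entries below are hypothetical; this is a sketch of the idea, not a production logger (which would also need append-only storage and off-host replication).

```python
# A tamper-evident log: each entry stores a SHA-256 digest computed
# over the previous entry's digest plus its own message, forming a
# chain that any in-place edit will break.
import hashlib

def append_entry(log, message):
    prev = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the chain; False means some entry was altered."""
    prev = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "admin1 changed firewall rule 42")
append_entry(log, "admin2 approved change")
print(verify(log))            # True: chain intact
log[0] = ("admin1 changed nothing", log[0][1])
print(verify(log))            # False: tampering detected
```

Routine and surprise audits then reduce to running the verifier and reading the entries it vouches for.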
Policy Factors
A somewhat fuzzier area of vulnerability is weak policy: its impact is far more difficult to quantify, it does not lend itself to easy definition, and its dangers are not easily articulated. But it is real and impactful, so much so that in its absence the risk spirals upwards over time. A policy defines the priorities of the organization, the processes that further those priorities, and the sanctions enforced for policy violations; therefore, unless there is a clear and articulated security policy, the various functions of the organization will not align their efforts towards its furtherance.
A well-formulated security policy incrementally nudges its adherents towards better and better security, continually improving their focus and efficiency. It provides a reference document to which every person can revert in case of uncertainty, and serves as a baseline for detecting, correcting and punishing aberrant behaviour. It enforces uniformity throughout the organization, which also means that standardized tools and processes can be implemented across it.
A weak policy, misaligned with the organization's goals, creates vulnerabilities at the macro level. For example, if a policy overtly emphasizes cost saving, then every process and procedure in the organization will err on the side of cost saving; dozens of small decisions will be taken by lower-level functionaries that compromise security in the interest of cost saving, and eventually the cumulative effect of these decisions will kill security.
In conclusion, while threats to defence networks are many and varied, it is quite possible to contain the risk and enjoy the benefits, even in the absence of a coherent doctrine and top-level guiding policy from governments. There is a lot that individual sensitive government organizations can do on their own to keep their wires buzzing.