One of the biggest challenges in information security is adapting to change. While you might say this is true of any profession, allow me to explain why it is particularly true in infosec. Security must be adaptable on a macro level, as with changes to compliance standards like PCI, and also on a micro level, as with an individual web application or desktop operating system.
Because so many information security controls are implemented as infrastructure (firewalls, intrusion prevention systems, log servers, antivirus and anti-malware detection, etc.), adaptability becomes harder as the systems needed for thorough inspection of data in-flight and at-rest proliferate. One of the most rapidly growing categories of security technology is endpoint systems designed to detect behavior that is an indicator of compromise (IOC). These solutions work by monitoring file system and network activity for anything unusual or known to be malicious, using both behavioral and signature-based methodologies. However, these solutions rely on all systems – desktops and servers, but also mobile devices and other devices in the Internet of Things – having some sort of inspection agent installed.
Since it may not be feasible or practical to install an agent on many endpoints in the network, we also rely on NGFWs, IPS, anti-malware, and other systems to monitor network paths into and out of the data center for known-malicious and/or anomalous behavior. The bad guys realize these various controls are in place and are constantly seeking alternate network paths with less monitoring in place. These attackers also know that systems that prioritize performance – such as web applications – will likely have fewer compensating controls in place. Similarly, outbound network paths such as those for DNS lookups need to be open for most of the infrastructure, and are often minimally inspected.
The rise of network function virtualization (NFV) and its sister technology, software-defined networking (SDN), has made previously static network paths much more mutable. In the same way, security technology is gaining adaptability via SDN/NFV technologies as well as the expansion of API-driven controls for the various security solutions we’ve mentioned so far. In the past, I’ve written that security is the missing link in SDN. Since then, these security technologies have come a long way in becoming more SDN-ready.
Even absent pure-play SDN, there are SSL decryption solutions as well as network tap solutions providing mechanisms for dynamically inserting security services into, or removing them from, a data flow. In this way, the network path can be adapted much more rapidly when a potentially dangerous connection or request enters the data path.
Take, for example, a high-performance web application, such as those in online retail or financial services. For these use cases, performance and page-load times are paramount. But what if every connection or request from the Internet need not take the same path to the web application? If a source IP address belongs to a known botnet or anonymizer proxy, or comes from an unusual geolocation, a dynamic security service chain could be enacted for that source address, placing additional inspection tools in the path. Once inspected, that source address can either be blocked and shunned for future requests, or added to a temporary whitelist. Of course, other attributes – such as HTTP headers, browser capabilities and extensions, and other indicators of a legitimate user – could be leveraged to enable tighter or looser security service chains, such that only suspicious traffic is subject to inspection (and possible added latency).
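To make the idea concrete, here is a minimal sketch of that decision in Python. All of the names here – the service chains, the reputation set, the `choose_chain` function – are illustrative assumptions, not any particular product’s API; in a real deployment the reputation and geolocation data would come from threat-intelligence feeds and the chain would be enacted by an SDN controller or steering device.

```python
# Hypothetical sketch: per-source selection of a security service chain.
# Chain contents and data sources are illustrative only.

FAST_PATH = ["load-balancer", "web-app"]
INSPECTED_PATH = ["load-balancer", "waf", "ips", "ssl-inspect", "web-app"]

# Stand-ins for a botnet/anonymizer reputation feed and a geolocation policy.
KNOWN_BAD_SOURCES = {"203.0.113.7"}
UNUSUAL_GEOS = {"ZZ"}

def choose_chain(src_ip: str, geo: str) -> list:
    """Return the ordered list of security services for this source address."""
    if src_ip in KNOWN_BAD_SOURCES or geo in UNUSUAL_GEOS:
        # Suspicious source: route through the full inspection chain,
        # accepting the added latency for this traffic only.
        return INSPECTED_PATH
    # Everything else keeps the low-latency path.
    return FAST_PATH

suspicious = choose_chain("203.0.113.7", "US")
normal = choose_chain("198.51.100.5", "US")
```

The key point is that the latency cost of deep inspection is paid only by the small fraction of traffic whose attributes warrant it.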
By leveraging suspicion criteria to enable policy-based steering, it finally becomes possible to preserve performance for most legitimate traffic while enabling the tightest security controls for any suspicious traffic. These risk-based policies enable practitioners to align the security inspection mechanisms with the nature of the threat, rather than being forced into a “one-path-fits-all” approach or, worse yet, disabling certain security controls to maintain only minimum compliance standards. To enable these dynamic security service chains, it’s important to leverage SIEM and other analytic tools – such as performance monitoring and web analytics – to develop policies that accurately identify the risk level of a connection or request. Since the network and security paths are now much more dynamic, these service-chaining decisions must also be logged for audit, compliance, and troubleshooting purposes.
When selecting orchestration and/or traffic-steering solutions, there are some key features that should be requirements. First and foremost is robust, customizable logging at all points in the steering: whenever a dynamic path decision is made, it should be logged with source, destination, and reason. Second, these solutions should be API-driven, which enables any security service-chaining mechanism to be integrated with other orchestration or SDN tools. Third, the solution should have a proven ability to work with third-party systems. After all, the goal of such an architecture is to enable the continued and more efficient use of existing security solutions, and to ensure the selection of best-of-breed inspection and control mechanisms.
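The logging requirement above can be sketched as a structured record per decision. This is only an assumed shape – field names and the JSON format are my own, not a standard schema – but it shows the minimum an auditor would want: timestamp, source, destination, the chain chosen, and why.

```python
# Illustrative sketch: one audit record per dynamic steering decision.
# Field names and format are assumptions, not a specific product's schema.
import json
import datetime

def log_steering_decision(src, dst, chain, reason):
    """Serialize a path decision as a JSON line for shipment to a SIEM."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": src,
        "destination": dst,
        "service_chain": chain,   # the ordered services this flow traverses
        "reason": reason,         # why this path was chosen (audit/troubleshooting)
    }
    # In practice this line would be forwarded to a log server or SIEM.
    return json.dumps(record)

entry = log_steering_decision(
    "203.0.113.7", "web-app", ["waf", "ips"], "source on botnet feed"
)
```

Recording the reason alongside source and destination is what makes the dynamic path reconstructible later for audit and compliance.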
Many organizations are already implementing more dynamic security service chaining, tailoring their inspection tools and paths based on network- and/or application-level events. They are seeking the efficiency enabled by adaptability. With the rise of machine learning, behavioral detection mechanisms, and fluid network paths, even greater efficiencies and better security controls will be possible in the future.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.