Lost Your Edge? Getting Real Visibility For Monitoring Virtual Environments

By ISBuzz Team
Writer, Information Security Buzz | Nov 12, 2016 07:28 pm PST

Areg Alimian, Ixia’s Senior Director of Solutions Marketing, looks at best practices for eliminating network blind spots and ensuring reliable, fast, and secure business applications.

Has your organization’s network lost its edge? It almost certainly has. Virtualization, cloud migrations, the IoT, and the growing number of mobile devices connecting to the network are all stretching and distorting companies’ network edges, to the point where it is easy to lose sight of where those edges are – and what lies beyond them. The situation is further complicated by the fact that IT infrastructures are rarely moved to the cloud in a single step: budgetary constraints and security and performance concerns mean that enterprises are implementing hybrid models, sending a mix of business-critical and non-critical workloads outside their main on-premises and private cloud environments.

These hybrid environments, and the flow of business data between on-premises applications and clouds, increase complexity and make it harder for IT teams to gain the edge-to-edge network visibility they need to identify and predict outages, spot a security breach, or analyze mission-critical application performance issues.  Gartner states that “Lack of visibility proliferates due to increasing use of cloud-based apps, encryption, and general network expansion.”

So the questions for IT decision makers, as they look at moving their critical workloads from on-premises data centers into virtualized, software-defined data centers (SDDCs) or public clouds, are:

  1. How can we ensure the availability, reliability, and performance of our mission-critical applications?
  2. How do we get relevant critical data to analytics and monitoring tools, regardless of where the applications are?

Moving services to the cloud promises increased agility at a lower cost – but there are risks along the way, and more complexity to manage once you get there.  Public cloud outages can have a severe reputational impact, but even private cloud downtime is not easy to manage.  It can lead to significant operational problems and ultimately a damaged bottom line. The IT team needs to ensure that their hybrid infrastructure is reliable, fast, secure and cost-effective, with continual access to business-critical applications, and real-time monitoring and visibility to catch problems fast.

  Obscured visibility

Yet too many organizations still suffer from a lack of intelligent visibility into virtualized private or public clouds, which leads to increased threat exposure and an inability to monitor and troubleshoot critical events. Blind spots have become a severe security issue for enterprises and service providers – they prevent at least 75 percent of businesses from knowing that they have suffered a security breach* – and they slow the remediation of outages.

How, then, can IT teams intelligently anticipate and mitigate these security and reliability challenges when migrating mission-critical workloads to the cloud, or when implementing services across a mix of on-premise and cloud infrastructures?  There are six key elements to consider, to ensure resilience and security:

  1. Infrastructure and tenant separation

Private and public cloud service providers, who own the virtualized infrastructure, host workloads from multiple customers on top of the same shared virtual fabric. This can increase your attack surface and cause compliance issues, because security analytics and monitoring are implemented by the infrastructure owner, not by you. Intelligent visibility for data access and distribution is therefore needed to serve the tenant and the infrastructure separately: you need your own visibility into your workloads’ packet data.

  2. Right data, right tool, right time, right location

Your monitoring tools need to be able to access critical application data across both your virtualized networks and off-site environments. Getting the right data to the right tool, at the right place, and at the right time requires intelligent coupling between your security and application analytics tools and your visibility architecture, which provides pervasive data access, intelligent packet processing, and distribution. This enables a higher level of security intelligence, where your security and analytics tools get access to critical data from any virtualized environment, regardless of where those workloads are located.
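The "right data to the right tool" idea can be sketched as a simple rule table that maps classes of captured traffic to the tool that should receive them. This is an illustrative sketch only, assuming hypothetical tool names, record fields, and traffic classes; it does not represent any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class TrafficRecord:
    source_env: str      # e.g. "on-prem", "private-cloud", "public-cloud"
    app: str             # application that generated the traffic
    traffic_class: str   # e.g. "http", "dns", "database"

# Map traffic classes to the analytics tool that should receive them.
# These mappings are made up for illustration.
ROUTING_RULES = {
    "http": "app-performance-monitor",
    "dns": "security-analytics",
    "database": "app-performance-monitor",
}

def route(record: TrafficRecord) -> str:
    """Return the tool a record should be forwarded to.
    Unknown classes fall through to a catch-all capture store."""
    return ROUTING_RULES.get(record.traffic_class, "raw-capture-store")
```

The key design point the article makes is that this routing happens in the visibility layer itself, so each tool only receives the subset of traffic it actually analyzes, wherever the workload runs.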

  3. Security

A virtualized data center is just like any other segment of your network: if it has not been attacked yet, it probably will be. Or worse, it may have been attacked and you did not know. The visibility issue is compounded by the lack of advanced security forensics and analytics tools available for private and public cloud environments. This is why comprehensive, real-time visibility into your hybrid environment is essential: your security solutions need to collect packets in a tightly segmented and secure manner, protecting network tenants from one another without compromising the security walls between networks.

  4. Elastic scale

Elasticity is a fundamental characteristic of any hybrid environment: it needs both to stretch as the organization grows and to respond rapidly and flexibly to change. When designing your virtualized data access and monitoring for scale, elasticity, and flexibility, ask yourself how you can dynamically scale network monitoring in and out in step with the underlying systems it is watching. You should also consider how, once packets are collected, you will deliver them automatically to a tool that is itself likely to be virtual.
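Scaling monitoring "in conjunction with the underlying systems" can be as simple as deriving the number of virtual monitoring probes from the number of workload instances they watch. The sketch below assumes a 1:20 probe-to-workload ratio purely for illustration; any real ratio depends on probe throughput and traffic volume.

```python
import math

def monitor_capacity(workload_instances: int,
                     workloads_per_probe: int = 20,
                     min_probes: int = 1) -> int:
    """Return how many virtual monitoring probes to run for a given
    number of workload instances. The 1:20 default ratio is an
    illustrative assumption, not a vendor recommendation."""
    if workload_instances <= 0:
        # Keep a floor of probes running even when workloads scale to zero,
        # so monitoring is already in place when they scale back out.
        return min_probes
    return max(min_probes,
               math.ceil(workload_instances / workloads_per_probe))
```

An autoscaling hook could call this on every scale event for the workload fleet, so monitoring capacity grows and shrinks with the systems it observes rather than being provisioned statically.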

  5. Performance

The primary challenge to managing network and application performance within a virtualized datacenter or cloud is tied to enabling visibility.  In virtualized environments, the data may never traverse a physical switch or network, making monitoring difficult – with implications for application quality of experience.  If a performance problem emerges with a mission-critical application, will you be able to pinpoint where the fault is?  To do so, you need to get the right data to the right performance monitoring tool, at the right time.

  6. Fault tolerance and reliability

Reliability in a hybrid environment is achieved by designing your application so that no single instance is a single point of failure. You must also place your application across multiple availability zones and regions. This is all up to you as the application owner; the public or private cloud service provider does not do it for you. As a result, your application becomes more complex, and the number of VMs in the cloud will typically exceed the number in an on-premises deployment. This complexity drives the need for pervasive visibility, providing data access and intelligent packet processing and distribution that is itself fault tolerant, highly recoverable from its own failures, and able to scale as the service grows.
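The placement requirement described above, spreading instances across availability zones so no single zone failure takes the application down, can be sketched as a simple round-robin scheduler. Instance and zone names here are hypothetical.

```python
from itertools import cycle

def place_instances(instance_ids: list, zones: list) -> dict:
    """Round-robin application instances across availability zones so
    that losing any single zone still leaves replicas running elsewhere.
    Real schedulers also weigh capacity and latency; this sketch does not."""
    placement = {zone: [] for zone in zones}
    for instance, zone in zip(instance_ids, cycle(zones)):
        placement[zone].append(instance)
    return placement
```

With two zones and four instances, each zone ends up with two replicas; the visibility layer monitoring this application then has to follow the same spread, collecting packet data in every zone the workload lands in.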

The common theme across all of these six elements is visibility.  In any virtualized or hybrid environment, no matter how it is architected, there are two critical functions that must be included.  These are complete access to all the data crossing physical and virtual networks and clouds;  and intelligent processing and distribution of this data to analytics and data collection tools.  Together, these two functions eliminate network and security blind spots, which enables outages, threats and performance issues to be identified quickly.

As your network edges are blurred and distorted by hybrid IT environments, opening multiple entry points into your network or cloud infrastructure, you need to monitor those entry points carefully to identify, locate, isolate, and ultimately manage all traffic of interest. Virtual data access and filtering rules are no longer based on IP address or a VM instance, but on workload attributes and type of traffic. Monitoring takes place based on availability zone, network segment, and security group to ensure availability.
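An attribute-based filter of the kind described above can be sketched as matching on workload metadata (security group, availability zone, traffic type) rather than on an IP address or VM id. The attribute names and sample workloads below are illustrative assumptions, not a real tagging schema.

```python
def matches(workload: dict, rule: dict) -> bool:
    """True if a workload's attributes satisfy every key/value
    pair in the filter rule. Note there is no IP or VM id here:
    the rule survives instances being replaced or re-addressed."""
    return all(workload.get(key) == value for key, value in rule.items())

def select_traffic(workloads: list, rule: dict) -> list:
    """Return the ids of workloads whose traffic the rule captures."""
    return [w["id"] for w in workloads if matches(w, rule)]

# Hypothetical workload inventory tagged with attributes.
workloads = [
    {"id": "w1", "security_group": "web-tier", "zone": "az-1",
     "traffic_type": "http"},
    {"id": "w2", "security_group": "db-tier", "zone": "az-2",
     "traffic_type": "database"},
]
web_rule = {"security_group": "web-tier", "traffic_type": "http"}
```

Because the rule keys on attributes, a replacement VM with a new IP in the same security group is captured automatically, which is the practical advantage the article is pointing at.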

To achieve all this, you need a visibility solution that provides pervasive data access, intelligent packet processing, and distribution to monitoring and analytics tools across your entire network estate, whether on-premises, virtualized, or in the cloud. With this in place, even if your network is losing its edges, you will not lose sight of what really matters: ensuring your business applications are resilient, fast, and secure.

* Verizon DBIR 2016

